perm filename ALLHEU[DIS,DBL]2 blob sn#223402 filedate 1976-07-07 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00028 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00005 00002	.ASEC(AM's Heuristics)
C00009 00003	.ASSEC(Heuristics for dealing with Anything)
C00014 00004	.ASSECP(Heuristics for dealing with Any-concept)
C00027 00005	. ASSSEC(Heuristics for the Examples facets of Any-concept)
C00056 00006	. ASSSEC(Heuristics for the Conjecs facet of Any-concept)
C00064 00007	. ASSSEC(Heuristics for the Analogies facet of Any-concept)
C00072 00008	. ASSSEC(Heuristics for the Genl/Spec facets of Any-concept)
C00095 00009	. ASSSEC(Heuristics for the View facet of Any-concept)
C00097 00010	. ASSSEC(Heuristics for the In-dom/ran-of facets of Any-concept)
C00099 00011	. ASSSEC(Heuristics for the Definition facet of Any-concept)
C00103 00012	.ASSECP(Heuristics for dealing with any Active concept)
C00119 00013	.ASSECP(Heuristics for dealing with any Predicate)
C00124 00014	.ASSECP(Heuristics for dealing with any Operation)
C00143 00015	.ASSECP(Heuristics for dealing with any Composition)
C00155 00016	.ASSEC(Heuristics for dealing with any Insertions)
C00156 00017	.ASSEC(Heuristics for dealing with the operation Coalesce)
C00163 00018	.ASSEC(Heuristics for dealing with the operation Canonize)
C00174 00019	.ASSEC(Heuristics for dealing with the operation Substitute)
C00178 00020	.ASSEC(Heuristics for dealing with the operation Restrict)
C00180 00021	.ASSEC(Heuristics for dealing with the operation Invert)
C00182 00022	.ASSEC(Heuristics for dealing with Logical combinations)
C00184 00023	.ASSEC(Heuristics for dealing with Objects)
C00186 00024	.ASSEC(Heuristics for dealing with Structures)
C00190 00025	.ASSEC(Heuristics for dealing with Ordered-structures)
C00193 00026	.ASSEC(Heuristics for dealing with Unordered-structures)
C00194 00027	.ASSEC(Heuristics for dealing with Multiple-eles-structures)
C00195 00028	.ASSEC(Heuristics for dealing with Sets)
C00196 ENDMK
C⊗;
.ASEC(AM's Heuristics)

.QQ

To the extent that a professor of music at a conservatoire can assist his students
in becoming familiar with the patterns of harmony and rhythm,
and with how they combine, it must be possible to assist students in becoming
sensitive to patterns of reasoning and how they combine. The analogy is not
far-fetched at all.

--  Dijkstra

.ESS


This appendix  lists all the  heuristics with  which AM is  initially
provided.    They are  organized  by concept,  most  general concepts
first.  Within a concept, they are organized into four  groups:

.BN

⊗8#⊗* Fillin: rules for filling in new entries on various facets.

⊗8#⊗* Check:  rules  for  patching  up existing  entries  on  various
facets.

⊗8#⊗*  Suggest: rules  which propose  new tasks  to break  AM out  of
stagnant loops.

⊗8#⊗*  Interest: criteria for estimating the interestingness of various entities.

.E

Each heuristic is presented in English translation. Whenever there is
a very tricky, non-obvious, or brilliant translation  of some English
clause into LISP,  a brief note will follow about  how that is coded.
Also listed for each rule  are some example(s)  of its use, and  its
overall importance.   If the  rule is actually  used within  the main
portion  of this  document, the  relevant  page(s) are  listed within
square brackets.  Concepts which  have no heuristics are not  present
in this appendix.

Hundreds of heuristics were planned on paper but never coded (e.g., those dealing
with proof techniques,
those dealing with the drives and rewards of generalized message senders/receivers),
and whole classes of rules were coded but never used 
by AM during any of its runs (e.g., how to
deal with contradictions, how to deal with Intu's facets).
Such superfluous rules will not be included here.

.ONCE TURN ON "{}"

The rule numbering is referred to occasionally in other appendices.
The total number  of rules in AM is actually higher, since many rules
are present but never used, and many rules listed with one number here
are really several rules in LISP (e.g., see rules {[3]QUADRULE} and
{[3]DOUBRULE}).

.ALLHEU: ASECNUM ;
.ASSEC(Heuristics for dealing with Anything)

All these rules deal with any item X, be it concept, atom, event, etc.
These rules are about as general -- and as ⊗4weak⊗* -- as one can imagine.

⊗5Suggest⊗*

.BH

λλ If AM has recently referenced entity X,

Then boost the priority of any tasks involving X.

.ES


.BH

λλ If the user has recently referred to X,

Then boost the priority of any tasks involving X.

.E


The above two rules simply reaffirm the idea of "focus of attention".
The boost in ratings is only slight, and only temporary (it decays toward zero
exponentially
with time). Besides this gradual decline in task ratings,
the rule below explicitly 
modulates this boosting, so that infinite loops can be avoided.
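In modern (Python) terms, the decaying boost might be sketched as follows; the half-life constant and the additive combination are illustrative assumptions of this sketch, since AM's actual arithmetic is not reproduced here.

```python
import math

def boosted_priority(base, boost, seconds_since_reference, half_life=60.0):
    """Return a task's priority after a focus-of-attention boost.

    The boost decays toward zero exponentially with time, so a task
    referencing a long-untouched entity regains its base rating.
    (half_life is a hypothetical tuning parameter.)
    """
    decay = math.exp(-math.log(2) * seconds_since_reference / half_life)
    return base + boost * decay
```

After one half-life the boost has fallen to half its initial value; after many, the task rating is effectively back to its base.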

.BH

λλ If AM has recently dealt with X with poor results,

Then lower the priority rating of all tasks involving X.

.ES

.BH

λλ If AM just referenced X and almost succeeded, but not quite,

Then look for a very similar entity Y, and retry the activity with Y in place of X.

.E

There is a separate  precise meaning for "almost succeed", "similar entity", and
"retry" for each kind of entity and activity that might be involved. For example,
if the activity were a ⊗4task⊗* (say to fill in examples of Odd-primes)
and the entity
X was a ⊗4concept⊗* (in this case, Odd-primes), 
then a similar entity might be the concept Odd-numbers, and in that case
the result of this rule would be
⊗4a new task⊗* (to fill in examples of Odd-numbers).
If the failure occurred while AM was trying to access the examples facet of Primes,
with X=Examples, then a similar entity might be the Boundary-examples facet, and
the rule would suggest that AM access instead the Boundary-examples facet of Primes.
Of course, this rule is so weak that it is not often of much help.


.BH

λλ If 
space is running out, and
AM has not referenced X for  a long time, and X is taking up a lot of space,
and no important conjectures reference X,

Then X may be forgotten and its space liberated.
Probably the user should be informed of this, at least tersely.

.E

Just a general-purpose directive for emergency garbage-collection.

⊗5 Interest⊗*

.A4ANYTHING: SSECNUM;

.A4ANYTHINGI: PAGE;

.BH

λλ Any entity X 
is interesting if it is referred to in several interesting conjectures.

.ES

.BH

λλ Any entity X 
is interesting if it is related (via a rare, interesting relation) to another
entity which arose in a very different way and is not obviously tied to X.

.E

Unexpected connections are worth closer examination, typically.

.BH

λλ Entity X is interesting if there is an analogy in which
X corresponds to Y, and Y has turned out to be very interesting.

.ES

.ASSECP(Heuristics for dealing with Any-concept)

This concept has a huge number of heuristics. For that reason, I have partitioned
off -- both here and in AM itself$$
Thus the LISP program has a separate concept called "Examples-of-any-concept",
another concept called "Definitions-of-any-concept", etc.
$ -- the heuristics which apply to each kind of facet. 

.A4ANYBINGF: PAGE;

.ASSSEC(Heuristics for any facet of Any-concept)

The first set of heuristics we'll look at are very general, applying to no
particular facet exactly.

⊗5 Fillin⊗*

.BH

λλ When trying to fill in facet F of concept C, for any C and F,

If C is analogous to concept X, and X.F has some entries,

Then try to construct the analogs of those entries, and see if they
are really valid entries for C.F.

.E

Recall that "C.F" is shorthand for "facet F of concept C". This rule simply
says that if an analogy exists between two concepts C and X,
then it may be strong enough to map entries on X.F into entries for C.F,
for any randomly-chosen facet F.
There is an analogy between Sets and Bags, and AM uses the above rule to turn
the extreme example of Sets -- the empty set -- into the extreme kind of bag.


⊗5 Suggest⊗*

.BH

λλ If the F facet of concept X is blank, 

Then consider trying to fill it in. 

.E

.NEWCEX: BNN;

The above super-weak rule will result in a new task being added to the agenda.
The rating of the task will depend on the Worth of the concept X, and the
overall worth of the facet F which is blank.

.BH

λλ While trying to fill in facet F of concept C, for any C and F,
if C is known to be similar to some other concept D, except for difference d,

Then try to fill in C.F by selecting items from D.F for which d is nonexistent.

.E

.LOOKDIF: BNN;

This rule is made more specific when F is actually known, and hence 
the format of
d is actually determined. For example, if C=Reverse-at-all-levels, F=examples,
then (at one particular moment) a note is found on the Ties facet of
concept C which says that C is just like the concept D=Reverse-top-level,
except C also recurs on the nonatomic elements of its arguments, whereas D doesn't.
Thus d is made null by choosing examples of D for which there are no nonatomic
elements. 
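The selection step of this rule amounts to a filter over D.F; in the Reverse-at-all-levels example, "d is nonexistent" means the example contains no nonatomic elements. A minimal sketch, with the flat-list test standing in for that particular d:

```python
def fill_from_similar(d_examples, difference_absent):
    """Fill C.F by selecting those entries of D.F on which the
    difference d between C and D cannot manifest."""
    return [e for e in d_examples if difference_absent(e)]

# Reverse-at-all-levels agrees with Reverse-top-level exactly on
# lists having no nonatomic (i.e., list) elements:
flat = fill_from_similar([[1, 2], [1, [2]], [], [[3], 4]],
                         lambda e: all(not isinstance(x, list) for x in e))
```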

.BH

λλ After dealing with concept C,

Slightly, temporarily boost 
the priority value
of each existing task which involves an Active concept
whose domain or range is C.

.E

.FROB1: BNN;

This is done efficiently using the In-dom-of and In-ran-of facets of C.
A typical usage was after checking in examples of Bags, when AM slightly
boosted the rating of filling in examples of Bag-union, and this task
just barely squeaked through as the next one to be chosen.
Note that the rule reinforced that task twice, since both domain and range
of Bag-union are bags.



⊗5 Check⊗*


.BH

λλ When checking facet F of concept C, (for any F and C,)

Prune away at the entries there  until the facet's size is reduced to
the size which C merits.

.E

The algorithm for doing this is as follows: The Worth of C is multiplied by
the overall worth of facet type F. This is normalized in two ways, yielding
the maximum amount of list cells that C.F may occupy, and also yielding the
maximum number of separate entries to keep around on C.F. If either limit
is being exceeded, then an entry is plucked at random (but weighted to favor
selection from the rear of the facet) and excised. This repeats as long as
C.F is oversized. As space grows tight, the normalization weights decline.
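That algorithm might be sketched as follows; the normalization constants and the linear rear-weighting scheme are invented for illustration:

```python
import random

def prune_facet(entries, worth_c, worth_f, cell_norm=0.5, entry_norm=0.05):
    """Prune a facet's entries down to the size its concept merits.

    worth_c * worth_f is normalized into two quotas (constants here are
    hypothetical): a ceiling on total list cells and a ceiling on the
    number of entries.  While either quota is exceeded, a random entry
    -- weighted toward the rear of the list -- is excised.
    """
    merit = worth_c * worth_f
    max_cells = merit * cell_norm
    max_entries = merit * entry_norm
    entries = list(entries)
    while entries and (len(entries) > max_entries or
                       sum(len(e) for e in entries) > max_cells):
        # weight selection toward the rear: position i gets weight i+1
        weights = [i + 1 for i in range(len(entries))]
        victim = random.choices(range(len(entries)), weights=weights)[0]
        del entries[victim]
    return entries
```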

.BH 

λλ When checking facet F of concept C,

Eliminate redundant entries.

.E

Although it might conceivably mean something for an entry to occur twice,
this was never desirable for the set of facets which each AM concept possessed.


⊗5 Interest⊗*

The interest features apply to tell how interesting a concept is, and are
rarely subdivided by relevant facet. That is, most of the reasons that
Any concept might be interesting will be given below.

.BH

λλ A  concept X is interesting if X.Conjecs contains some interesting entries.

.ES

.A4ANYB: SSECNUM; 

.A4ANYBINGI: PAGE;


.BH

λλ A concept is interesting if its boundary accidentally coincides with
another, well-known, interesting concept.

.E

The boundary of a concept means the items which just barely fall into
(or just barely miss satisfying) the definition of that concept. Thus the
boundary of Primes might include 1,2,3,4.
If the boundary of Even numbers includes numbers differing by at most 1 from
an even number, then clearly their boundary is ⊗4all⊗* numbers. Thus it coincides
with the already-known concept Numbers, and this makes Even-nos more interesting.
This expresses the property we intuitively understand as: no number is very far
from an even number.
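One way to approximate "just barely", sketched here under the assumption that each kind of entity supplies its own neighborhood relation (AM's actual notion is per-concept):

```python
def boundary(defn, candidates, near):
    """Items that just barely fall into, or just barely miss, a
    definition: some neighbor under `near` disagrees with the item
    about the definition."""
    cands = set(candidates)
    return {x for x in cands
            if any(defn(x) != defn(y) for y in near(x) if y in cands)}

# every number is within 1 of an even number, so the boundary of
# the evens coincides with all of Numbers, as noted above
evens_boundary = boundary(lambda n: n % 2 == 0, range(10),
                          lambda n: {n - 1, n + 1})
```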

.BH

λλ A concept is interesting if its boundary accidentally coincides with
the boundary of another, very different, interesting concept.

.E

Thus, for example, Primes and Numbers are both a little more interesting
since the extreme cases of numbers are all boundary cases of primes.
Even numbers and Odd numbers both have the same boundary, namely Numbers.
This is a tie between  them, and slightly raises AM's interest in both concepts.

.BH

λλ A concept is interesting if it is -- accidentally -- precisely the
boundary of some other, interesting concept.

.E

In the case mentioned for the above rule, Numbers is raised in interest
because it turns out to be the boundary for even and odd numbers.


.BH

λλ A concept is boring if, after several attempts, only a couple
examples are found.

.E

Another rule indicates, in such situations, that the concept may be
forgotten  and replaced by some conjecture.

.BH

λλ Concept C is interesting if some normally-inefficient operation F
can be efficiently performed on C's.

.E

Thus it is very fast to perform Insert of items into lists because
(i) no pre-existence checking need be done (as with sets and osets),
and (ii) no ordered merging need be done (as with bags).
So "Lists" is an interesting concept for that reason, according to the 
above rule.

.BH

λλ Concept C is interesting if each example of C accidentally seems to
satisfy the otherwise-rarely satisfied predicate P, or (equivalently)
if there is an unusual conjecture involving C.

.E


This is almost a primitive affirmation of interestingness.


.BH

λλ Concept C is interesting if C is closely related to the very interesting
concept X.

.E


This is interestingness by association. AM was interested in
Divisors-of because it was closely related to TIMES, which had
proven to be a very interesting concept.

.BH

λλ Concept C is interesting if there is an analogy in which
C corresponds to Y, and the analogs of the Interest features of Y
indicate that C is interesting.

.E

This might have been a very useful rule, if only there had been more
decent analogies floating around the system. As it was, the rule was
rarely used to advantage. It essentially says that the analogs of
Interest criteria are themselves (probably) valid criteria.

.BH

λλ A concept C is interesting if one of its generalizations or specializations
turns out to be unexpectedly very interesting.

.E

"Unexpected" means that the interesting property hadn't already been observed for
C. If C is interesting in some way, and then one of its generalizations is
seen to be interesting
in exactly
the same way, then that is "expected". 
It's almost more interesting if the second concept unexpectedly ⊗4lacks⊗*
some fundamental property about C. At least in that case AM might learn something
about what gives C that property. In fact, AM has this rule:

.BH

λλ If concept C possesses some very interesting property lacked by one of its
specializations S,

Then both C and S become slightly more interesting. 

.E; ONCE TURN ON "{}"

In the LISP program, this is closely linked with rule {[3]INTERMES}.

. ASSSEC(Heuristics for the Examples facets of Any-concept)

The following heuristics are used for dealing with the many kinds of
examples facets which a concept can possess: non-examples, boundary examples, 
Isa links, etc.

⊗5 Fillin⊗*

.BH

λλ To  fill in  examples of  X, where  X is a  kind of  Y (for  some more
general concept Y),

Inspect the examples of Y; some of them may be examples of X as well.

.E

.LOOKGEX: BNN;

For the task of filling in Empty-structures, AM knows that concept is
a specialization of Structures, so it looks over all the then-known
examples of Structures. Sure enough, a few of them are empty (satisfy
Empty-structures.Defn).
Similarly, for the task of filling in  examples of Primes, this rule would  have
AM notice  that Primes is a  kind of Number, and  therefore look over
all the known examples of Number.

.BH

λλ To fill in non-examples of concept X,

Search the specializations of X. Look at all their non-examples.
Some of them may turn out to be non-examples of X as well.

.E

This rule is the counterpart of the last one, but for ⊗4non⊗*-examples.
As expected, this was less useful than the preceding positive rule.

.BH

λλ If the current task is to fill in examples of any concept X,

Then one way to get them is to symbolically instantiate a definition of X.

.E

.INDEF: BNN;

That rule simply says to use some known tricks, some hacks, to wring examples
from a declarative definition. One trick AM knows about is to plug already-known
examples of X into the recursive step of a definition. Another trick is simply to
try to instantiate the base step of a recursive definition.

.BH

λλ If the current task is to fill in non-examples of concept X,

Then one fast way to get them is to pick any random item, any example of Anything,
and check that it fails X.Defn.

.E

This is an affirmation that for any concept X, most things in the universe will
probably not be X's. This rule was almost never used, since there is no "reason"
for remembering the resultant non-examples it produces.

.BH


λλ 
To fill in examples of concept X,

If X.View tells how to view a Z as if it were an X, 
and some examples of Z are known,

Then just run X.View on those examples, 
and check that the results really are X's.

.E

Thus examples of osets were found by viewing other known examples of structures
(e.g., examples of sets) as if they were osets.

.BH

λλ 
To fill in examples of concept X,

Find an operation whose range is X,$$ or at least INTERSECTS X. Use the In-ran-of 
facets and the
rippling mechanism to find such an operation. $
and find examples of that operation being applied.

.E

To fill in examples  of Even-nos, this rule might have  AM notice the
operation Double. Any example of Double will contain an example of an
even number as its value; e.g., <3→6> contains the even number 6.
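A sketch of this harvesting, with Double standing in for the operation whose range intersects Even-nos:

```python
def examples_from_range(op_examples, defn):
    """Harvest examples of a concept X from a recorded operation whose
    range is X: each <args -> value> pair contributes its value as a
    candidate, confirmed by running X's definition."""
    return [value for args, value in op_examples if defn(value)]

# examples of Double, each of whose values is an even number
double_examples = [((3,), 6), ((0,), 0), ((7,), 14)]
evens = examples_from_range(double_examples, lambda n: n % 2 == 0)
```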

.BH

λλ If the current task is to fill in examples of concept X,

One bizarre way is to specialize X, adding a strong constraint to X.Defn,
and then look for examples of that new specialization.

.E

Like the classical "insane heuristic"$$
A harder task might be easier to do. A stronger theorem might be easier
to prove. See [Bledsoe...], page...
$, this sounds crazy but works
embarassingly often. If I ask you to find numbers having a prime
number of divisors, the rate at which you find them will probably be lower
than if I'd asked you to find numbers with precisely 2 divisors.
The ⊗4variety⊗* of examples will suffer, of course.
The converse of this heuristic -- for non-examples -- was deemed too unaesthetic
to feed to AM.

.BH

λλ To fill in examples of X,

One inefficient method is to examine random examples of Anything,
checking each by running X.Defn to see if it is an X.
Slightly better is to ripple outward from X in all directions,
testing all the examples of the concepts encountered.

.E

This is blind generate-and-test, and was (luckily) not needed much by AM.

.BH

λλ To find more examples of X,
when a nice big example is known, and X has a recursive definition,

Try to plug the known example into the definition and produce a
simpler one. Repeat this until an extreme is reached.

.E

For example, AM had a definition of a set as
"Set(S) if S={} or if Set(Remove-random-element(S))."
When AM found the big example {A,B,{{C},D},{{{E}}},F} by some other
means, it used the above rule on the recursive definition to
turn this into {A,B,{{{E}}},F} by removing the randomly-chosen third element.
{A,B,F} was produced next, followed by {B,F} and {F}. After that, {} was
produced and the rule relinquished control.
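The peeling process might be rendered as follows; the representation of sets as frozensets is an artifact of this sketch:

```python
import random

def simpler_examples(example, peel, is_extreme):
    """Plug a known big example into the recursive step of a
    definition, producing ever-simpler examples until an extreme
    (base) case is reached."""
    chain = [example]
    while not is_extreme(chain[-1]):
        chain.append(peel(chain[-1]))
    return chain

random.seed(0)
big = frozenset({"A", "B", "F", frozenset({"E"})})
# the recursive step of Set, run backwards: remove a random element
peel = lambda s: s - {random.choice(sorted(s, key=repr))}
chain = simpler_examples(big, peel, lambda s: len(s) == 0)
```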

.BH

λλ To find examples of X,
when X has a recursive definition,

One method with low success rate but high payoff is to try to invert that
definition, thereby creating a procedure for generating new examples.

.E

.INVDEF: BNN;

Using the previous example, AM was able to turn the recursive definition
of a set into the program "Insert-any-random-item(S)", which turns any set
into a (usually different and larger) new set.
Since the rules which AM uses to do these transformations are very special-purpose,
they are not worth detailing here. This is one 
very manageable
open problem, where someone might
spend some months and create a decent body of definition-inversion rules.
A typical rule AM has says "Removing an x and testing that P(x) is inverted
to finding any random x for which P(x) holds then inserting x".
The class of definitions which can be inverted using AM's existing rules
is quite small; whenever AM needed to be able to invert another particular
definition, the author simply supplied whatever rules would be required.
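The quoted inversion rule, applied to the set definition, might look like this; the finite pool of candidate items is an assumption of the sketch:

```python
import random

def invert_remove_and_test(p, universe):
    """Invert 'remove an x and test that P(x)' into a generator step:
    find any random x for which P(x) holds, then insert it."""
    candidates = [x for x in universe if p(x)]
    def insert_step(s):
        return set(s) | {random.choice(candidates)}
    return insert_step

random.seed(0)
# a generator of sets of even numbers, obtained by inverting the
# recognizer 'remove an element and check that it is even'
grow = invert_remove_and_test(lambda n: n % 2 == 0, range(10))
s = set()
for _ in range(20):
    s = grow(s)
```

Each application turns a set into a (usually different and larger) new set, exactly as the Insert-any-random-item program in the text does.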

.BH

λλ While filling in examples of C,

if two constructs x and y are found which are very similar yet only one of which
is an example of the concept C,

Then create a boundary example and boundary non-example, by slowly
transforming x and y into each other. 

.E

Thus when AM notices that {a} and {a,b,a} are similar yet not both sets,
it creates {a,b}, {b,a}, {a,a} and sees which are and are not examples of
sets. In this way, some boundary items (both examples and non-examples) are
created. The rules for this slow transformation are again special purpose.
They examine the difference between the items x and y, and suggest operators
(e.g., Deletion) which will reduce that difference. This GPS-like strategy
has been well studied by others, and its inferior
implementation inside AM
will not be detailed.
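A toy version of the slow transformation, using only deletion and insertion of trailing elements (a far simpler operator set than a real GPS-style difference reducer):

```python
def transform_chain(x, y):
    """Transform construct x into y one step at a time: pop a trailing
    mismatch, else append the next missing element.  Every
    intermediate is a candidate boundary (non-)example."""
    cur, goal = list(x), list(y)
    chain = [cur]
    while cur != goal:
        if cur == goal[:len(cur)]:
            cur = cur + [goal[len(cur)]]
        else:
            cur = cur[:-1]
        chain.append(cur)
    return chain

is_set = lambda seq: len(seq) == len(set(seq))   # no repeated elements
chain = transform_chain(["a"], ["a", "b", "a"])
boundary_exs = [c for c in chain if is_set(c)]
boundary_non_exs = [c for c in chain if not is_set(c)]
```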

.BH

λλ If the main task now is to fill in examples of concept C,

Consider all the examples of "first cousins" of C. Some of them might
be examples of C as well.

.E

By "first cousins", we mean all direct specializations of all direct
generalizations of a concept, or vice versa. That is, going up once
along a Genl link, and then down once along a Spec link (or going
down one link and then up one link).

.BH

λλ If the main task now is to fill in boundary (non-)examples of concept C,

Consider all the boundary (non-)examples of "first cousins" of C. Some of them might
lie on the boundary of C as well.

.E

If they turn out not to be boundary examples, they can be recorded as boundary
non-examples, and vice versa.

.BH

λλ To fill in Isa links of concept 
X, (that is, to find a list of concepts of which X is an
example),

Just ripple down the tree of concepts, applying a definition of each concept.
Whenever a definition fails, don't waste time trying any of its specializations.
The Isa's of X are then all the concepts tried whose definitions passed X.

.ES
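The rippling search of the rule above can be sketched as a pruned tree walk; the toy concept tree below is illustrative, not AM's actual hierarchy:

```python
def isa_links(x, root, specs, defns):
    """Ripple down the concept tree from the root, applying each
    concept's definition to x; when a definition fails, skip that
    concept's entire specialization subtree."""
    isas, frontier = [], [root]
    while frontier:
        c = frontier.pop()
        if defns[c](x):
            isas.append(c)
            frontier.extend(specs.get(c, []))
    return isas

specs = {"Anything": ["Number"], "Number": ["Even", "Prime"]}
defns = {"Anything": lambda x: True,
         "Number": lambda x: isinstance(x, int),
         "Even": lambda n: n % 2 == 0,
         "Prime": lambda n: n > 1 and all(n % d for d in range(2, n))}
```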


⊗5 Suggest⊗*

.BH

λλ If some (but not most) examples  of X are also examples of Y (for some
concept Y),

and some (but not most) examples of Y are also examples of X,

Create a new concept defined as the intersection of those 2  concepts
(X and Y). This will be a specialization of both concepts.

.E

If you happen to notice that some  primes are palindromic,  this rule would suggest
creating a
brand new  concept, defined as the set of  numbers which are both palindromic
and prime. 
AM never actually noticed this, since it represented all numbers in unary.

.BH

λλ If very few examples of X are found,


Then add  the following task  to the agenda: "Generalize  the concept
X",  for the following reason:  "X's are quite  rare; a slightly less
restrictive concept might be more interesting".

.E

Of course, AM  contains a precise meaning for  the phrase "very few".
When AM looks  for primes
among examples of already-known kinds of numbers,
it will find  dozens of non-examples for every  example of a prime it
uncovers.   "Very few" is  thus naturally  implemented as a
statistical confidence level.
AM uses this rule when very few examples of Equality are found readily.

.BH

λλ If very many examples of X are found in a short period of time,

Then try to create a new, specialized version of X.

.E

.SPEASY: BNN;

This is similar to the preceding rule.
Since numbers are easy to find, this might cause us to look for certain
more interesting subclasses of numbers to study.

.BH

λλ If there are no known examples for the interesting concept X,

Then consider spending some time looking for such examples.

.E

I've heard of a math student who defined a set of numbers which had quite
marvelous properties. After the 20↑t↑h incredible theorem about them he'd proved,
someone noticed that the set was empty. The danger of unwittingly dealing with a
vacuous concept is even worse for a machine than for a human mathematician. The
above rule explicitly prevents that.

.BH

λλ If the totality of examples of concept C is too small to be interesting,

Then consider these alternatives: (i) generalize C; (ii) forget C completely;
(iii) replace C by one conjecture.

.E

This is a good example of when a task like ⊗6"Fill in generalizations of 
Numbers-with-α1-divisors"⊗* might get proposed with a high-priority reason.
The class of entities which C encompasses is simply too small, too trivial
to be worth maintaining a separate concept. When C is numbers-with-α1-divisor,
C is really just another disguise for the singleton set {1}. The above rule might
cause a new task to be added to the agenda, ⊗6Fill in generalizations of
Numbers-with-α1-divisor⊗*. When that task is executed, AM might create the
concept Numbers-with-odd-no-of-divisors, Numbers-with-prime-number-of-divisors, etc.
Besides generalizing that concept, the above rule gives AM two other alternatives.
AM may simply obliterate the
nearly-vacuous concept, perhaps leaving around just the statement "⊗41 is the
only number with one divisor⊗*". That conjecture might be tacked onto
the Conjecs facet of Divisors-of. The actual rule will specify criteria for
deciding which of the three alternatives to try. In fact, 
AM really starts all three activities:
a task will always be created and added to the agenda (to generalize C),
the vacuous concept will be tagged as "forgettable", and AM will attempt to
formulate a conjecture (the only items satisfying C.Defn are C.Exs).

.BH

λλ If the totality of examples of concept C is too large to be interesting,

Then consider these alternatives: (i) specialize C; (ii) forget C completely;
(iii) replace C by one conjecture.

.E

This is analogous to the preceding rule, but is used far less frequently.
One common use is when a disjunction of two concepts has been formed which is
accidentally large or already-known (e.g., "Evens ∪ Odds" would be replaced by a
conjecture).

.BH

λλ After filling in examples of C, if some examples were found,

Look at all the operations which can be applied to C's 
(that is, access C.In-dom-of),
find those which are interesting but which have no known examples,
and suggest that AM fill in examples for them, because some items are
now known which are in their domain, namely C.Exs.

.E

This rule had AM fill in examples of Set-insertion, as soon as some
examples of Sets had been found.

.BH

λλ After filling in examples of C, if some examples were found,

Consider the task of Checking the examples facet of concept C.

.E

This was very frequently used during AM's runs.

.BH

λλ After checking examples of C, if many examples remain,

Consider the task of Filling in C.Conjecs.

.E

This was used often by AM. After checking the examples of C, AM would try to
empirically formulate some interesting conjecture about C.

.BH

λλ After successfully filling in non-examples of X, if no examples exist,

If AM has not recently tried to find examples of X, then it should do so.

If AM has recently tried and failed to find examples, consider the conjecture
that X is vacuous, empty, null, always-False. Consider generalizing X.

.ES

.BH

λλ After trying in vain to find some non-examples of X, if many examples exist,

Consider the conjecture
that X is universal, always-True. Consider specializing X.

.ES

.BH

λλ After successfully filling in examples of X, if no non-examples exist,

If AM has not recently tried to find non-examples of X, 
then it should consider doing so.

If AM has recently tried and failed to find non-examples, consider the conjecture
that X is universal, always-True. Consider specializing X.

.ES

⊗5 Check⊗*

.BH

λλ If the current task is to Check Examples of concept X,

and (Forsome Y) Y is a generalization of X with many examples,

and all examples of Y (ignoring boundary cases) are also  examples of
X,

Then conjecture that X is really no more specialized than Y,

and Check the truth of this conjecture on boundary examples of Y,

and see  whether Y might  itself turn out  to be no  more specialized
than one of ⊗4its⊗* generalizations.

.E

This  rule  caused  AM, while  checking  examples  of  odd-primes, to
conjecture that ⊗4all⊗* primes were odd-primes.

.BH

λλ If the current task is to Check Examples of concept X,

and (Forsome Y) Y is a specialization of X,

and all examples of X (ignoring boundary cases) are  also examples of
Y,

Then conjecture that X is really no more general than Y,

and Check the truth of this conjecture on boundary examples of X,

and see  whether Y might itself  turn out to be no  more general than
one of ⊗4its⊗* specializations.

.E

This rule is analogous to the preceding one for generalizations.

.BH

λλ When checking boundary examples of a concept C,

ensure that every scrap of C.Defn has been used.

.E

It is often  the tiny details  in the definition  that determine  the
precise boundary.  Thus we must look  carefully to see whether Primes
allows 1 as  an example or not.  A definition like "numbers divisible
only by 1  and themselves" includes 1,  but this definition  doesn't:
"numbers having  precisely 2  divisors".  In  the LISP  program, this
rule contains several hacks (tricks) for checking that the definition
has been stretched to the fullest.  For example: if the definition is
of the form "all x  in X such that...", then pay careful attention to
the boundary of X.  That is, take the  time to access  X.Boundary-exs
and X.Boundary-non-exs, and check them against C.Defn.

.BH

λλ When checking examples of C,

Ensure that each example satisfies C.Defn, and each non-example fails
it.  The precise member of  C.Defn to use can be chosen depending  on
the example.

.E

As described  earlier in the  text, definitions can  have descriptors
which  indicate what kinds of arguments they  might be best for, their
overall speed, etc.


.BH

λλ When checking examples of C,

If an entry e is rejected (i.e., it is seen to be not an example of C
after all), then remove e from C.Exs and consider inserting it on the
Boundary non-examples facet of C.

.E

There is a  complicated$$ Not necessarily  sophisticated.  First,  AM
accesses the Worth of  C.  From this it determines  how many boundary
non-examples C deserves to keep around (and how many total list cells
it merits). AM compares these quotas with the current number  of (and
size of) entries already listed on C.bdy-non-exs.  The degree of need
of  another entry  there then  sets the  "odds" for  insertion versus
forgetting.  Finally  a random  number  is  computed,  and  the  odds
determine  what range  it must  lie in  for e  to  be remembered.   $
algorithm for deciding  whether to forget  e entirely or  to keep  it
around as a close but not close enough kind of example.

.BH

λλ When checking examples of C, 

After an entry e has been verified as
a bona fide example of C,

Check whether e  is also a valid example of some direct specialization of C.

If it is, then  remove it from C.Exs, and  consider adding it to  the
examples  facet  of that  specialization,  and  suggest the  task  of
Checking examples of that specialization.

.ES

.BH

λλ When checking examples of C,

If an entry e is rejected,

Then check whether e is nevertheless a valid example of some generalization
of C. 

If it is, consider adding it to that concept's boundary-examples facet.

.E

This is similar to the preceding rule.


.BH

λλ When checking non-examples of C, including boundary non-examples,

Ensure that each one fails a definition of C. Otherwise, transfer it to
the boundary examples facet of C.

.ES

.BH

λλ When checking non-examples of C, including boundary non-examples,

After an entry e has been verified as
a bona fide non-example of C,

Check whether e  is also a non-example of some direct generalization of C.

If it is, then  remove it from C.Non-Exs, and  consider adding it to  the
non-examples  facet  of that  generalization,  and  suggest the  task  of
Checking examples of that generalization.

.ES

.BH

λλ When checking (boundary)  non-examples of C,

If an entry e is rejected, that is if it turns out to be an example of C after all,

Then check whether e is nevertheless a non-example of some specialization
of C. 

If it is, consider adding it to that concept's boundary non-examples facet.

.E

This is similar to the preceding rule.

. ASSSEC(Heuristics for the Conjecs facet of Any-concept)

⊗5 Fillin⊗*

When the task is to look around and find conjectures dealing with concept C,
the following general rules may be useful.

.BH

λλ If there is an analogy from X to C, and a nice item in X.Conjecs, formulate and
test the analogous conjecture for C. 

.ES

.BH

λλ Examine C.Exs for regularities. 

.E

What mysteries are lurking in the LISP code for ⊗4this⊗* rule, you ask?
Nothing but a few special-purpose hacks and a few ultra-general hacks.
Here is a slightly more specific rule for you seekers:

.BH

λλ Look at C.Exs. Pick one element at random. Write down statements true
about that example e. Include a list of all concepts of which it is an
example, all Interests features it satisfies, etc. 

Then check each conjecture on this list against all other known examples of C.
If any example (except a boundary example) of C violates a conjecture, discard it.

Take all the surviving conjectures, and eliminate any which trivially follow
from other ones.

.E

This is a common tactic of AM's: induce a conjecture from one example and test
it on all the rest.
A more sophisticated approach might be to induce it by using a few examples
simultaneously, but I haven't thought of any nontrivial way to do that.
The careful reader will perceive that most of the conjectures AM will derive
using this heuristic will be of the form "X is unexpectedly a specialization
of Y", or "X is unexpectedly an example of Y", etc.
Indeed, most of AM's conjectures are really that simple syntactically.
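That induce-and-test loop can be rendered as a minimal sketch (Python, not AM's LISP; the predicate table is a hypothetical stand-in for the features AM gathers about an example):

```python
def induce_conjectures(examples, predicates, boundary=()):
    # Pick one example and write down everything true of it.
    seed = examples[0]
    candidates = [name for name, pred in predicates.items() if pred(seed)]
    # Test each candidate against all other known examples;
    # boundary examples are permitted to violate a conjecture.
    # (The final step, dropping conjectures which trivially follow
    # from others, is omitted here.)
    return [name for name in candidates
            if all(predicates[name](e)
                   for e in examples if e not in boundary)]
```

For instance, from the examples [2, 4, 6, 9] with 9 marked as a boundary example, the conjectures "even" and "positive" survive while "less than 5" is discarded.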

.BH

λλ Formulate a parameterized conjecture, a "template", which gets slowly
specialized or instantiated into a definite conjecture.

.E

AM has only a few trivial methods for doing this (e.g., introduce a variable
initially and find the constant value to plug in there later). As usual,
they will be omitted here, and the author encourages some research in this
area, to turn out a decent set of general rules for accomplishing this
hypothesis template instantiation. 
The best effort to date along these lines,
in one specific sophisticated
scientific field, is that of META-DENDRAL [Buchanan].

⊗5 Check⊗*

.BH

λλ If a universal conjecture (For all X's, ...) is contradicted by empirical
data, gather the data together and try to find a regularity in those
exceptions.

If this succeeds, 
give the exceptions a name N (if they aren't already a concept),
and rephrase the conjecture (For all X's which are not N's...).
Consider making X-N a new concept.

.E

Again note how "active" this little checking rule can be. It can
patch up nearly-true conjectures, examine data, define new concepts, etc.

.BH

λλ After verifying a conjecture for concept C,

See if it also holds for related concepts (e.g., a generalization of C).

.ES


There are of course bookkeeping details not explicitly shown above, which are
present in the LISP program. For example, if conjecture X is true for
all specializations of C, then it must be added to C.Conjecs and removed
from the Conjecs facets of each specialization.

⊗5 Suggest⊗*

.BH

λλ If X is probably related to Y, but no definite connection is known,

It's worthwhile looking for such a definite Tie between X and Y.

.E

How might AM know  that X and Y are  only probably related?  X  and Y
may play the same role in an analogy (e.g., the singleton bag "(T)"
and "any typical singleton bag"  share many properties), or they  may
both be  specializations of the  same concept  Z (e.g., two  kinds of
numbers), or  they may both have been created in the same unusual way
(e.g.,  Plus  and Times  and  Exponentiation  are  all  creatable  by
⊗4repeating⊗* another operation).

⊗5 Interest⊗*

.BH

λλ A conjecture about X is interesting if X is very interesting.

.ES

.BH

λλ A nonconstructive existence conjecture is interesting.

.E

Thus the unique factorization theorem is judged to be interesting because
it merely guarantees that some factoring will be into primes. If you give
an algorithm for that factoring, then the theorem actually loses its
mystique and (according to this rule) some of its value. But it increases
in value due to the next rule.

.BH

λλ A constructive existence conjecture is interesting if it is frequently used.

.ES

.BH

λλ A conjecture C about X is interesting if the origin and the verification of C
for each specialization of X were quite independent of each other, and
preceded the discovery that C applies to all X's.

.E

This would be even more striking if ⊗4proof⊗* techniques were known, and each
specialized case had a separate kind of proof.
Many number theory results are like this, where there exists a general proof
only for numbers bigger than 317, say, and all smaller numbers are simply
checked individually to make sure they satisfy the conjecture.


. ASSSEC(Heuristics for the Analogies facet of Any-concept)

⊗5 Fillin⊗*

.BH

λλ Consider the analog of each interesting conjecture about any concept involved
centrally in the analogy.

.E

There is a tradeoff (explicitly taken into account in the LISP version of this
rule) between how interesting a conjecture has to be, and
how centrally a concept has to fit into the analogy, in order to determine
what resources AM should be willing to expend to find the analogous conjecture.

Note that this is not a general suggestion of what to do, but a specific
strategy for enlarging the analogy itself. If the new conjecture is verified,
then not only would it be entered under some Conjecs facet, but it would also
go to enlarging the data structure which represents the analogy.

.BH

λλ Let the analogy suggest how to specialize and generalize each concept into
what is at least the analog of a known, very interesting concept.

.E

Like the last rule, this one simply says to use the analogy itself as the
"reason" for exploring certain new entities, in this case new concepts.
When the Bags↔Numbers analogy is made, AM notices that Singleton bags and
Empty bags are  two interesting, extreme specializations of Bags. The above
rule then allows AM to construct and study what we know and love as the numbers
one and zero. The analogy is flawed because there is only one "one", although
there are many different singleton bags. But just as singletons and empty bags
have special properties under bag operations, so do 0,1 under numeric operations.
This was one case where an analogy paid off handsomely.

.BH

λλ If it is desired to have an analogy between concepts X and Y, then look
for two already-known analogies between X↔Z and Z↔Y, for any Z. 

If found,
compose the two analogies and see if the resultant analogy makes sense.

.ES

Since the analogies are really just substitutions, composing them has a familiar,
precise meaning. This rule was never really used by AM, due to the paucity of
analogies. The user can push AM into creating more of them, and ultimately using
this rule. A chain from X↔Z↔Y↔X can be found which presents a new, bizarre
analogy from X to itself.
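Since an analogy here is just a substitution, composition can be written in a couple of lines (a Python sketch; the concept names in the example are merely illustrative):

```python
def compose_analogies(x_to_z, z_to_y):
    # Apply the second substitution to the range of the first;
    # any Z-side concept with no image is carried through unchanged.
    return {x: z_to_y.get(z, z) for x, z in x_to_z.items()}
```

Composing {Bags→Numbers} with {Numbers→Sets} yields the (possibly bizarre) analogy {Bags→Sets}, which AM would then have to test for sense.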

 
⊗5 Suggest⊗*

.BH

λλ If an analogy is strong, and one concept has a very interesting
universal conjecture C (For all x in B...), but the analog conjecture C' is false,

Then it's worth constructing the set of items in B' for which the conjecture holds.
It's perhaps even more interesting to isolate the set of exceptional elements.

.E

With the Add↔Times analogy, it's true that all numbers n>1 can be
represented as the sum of two other numbers (each of them smaller than n),
but it is ⊗4not⊗* true that all numbers (with just a couple exceptions)
can be represented as the product of other (hence smaller) numbers. 
The above rule has AM
define the set of numbers which can/can't be so represented. These are
just the composite numbers and the set of primes.
This second way of encountering primes was very unexpected -- both by AM and
by the author. It expresses the deep fact that one difference between Add and
Times is the presence of primes only for multiplication.
At the time of its discovery, AM didn't appreciate this fully of course.
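The defining property is easy to state computationally (a Python sketch of the distinction, not AM's own representation):

```python
def product_of_smaller(n):
    # True iff n can be written as a*b with both factors smaller
    # than n (equivalently, both greater than 1): the composites.
    return any(n % d == 0 for d in range(2, n))

composites = [n for n in range(2, 20) if product_of_smaller(n)]
primes     = [n for n in range(2, 20) if not product_of_smaller(n)]
```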

.BH

λλ If space is tight, and no use of the analogy has ever been made,
and it is very old, and it takes up a lot of space,

Then it is permissable to forget it without a trace.

.ES

.BH

λλ If two valuable conjectures are syntactically similar, and can be
made identical by a simple substitution, then tentatively consider the
analogy which is that very substitution.

.E

Thus the associative/commutative property of Add and Times causes them to be
tied together in an analogy, because of this rule.

.BH

λλ If an analogy is very interesting and very complete,

Then spend some time refining it, looking for small exceptions.
If none are found, see whether the two situations are genuinely isomorphic.

.ES

.BH

λλ If concepts X and Y are analogous, look for analogies between their
specializations, or between their generalizations.

.E

This rule is not used much by AM, although the author thought it would be.



⊗5 Interest⊗*

.BH

λλ An analogy which has no discrepancies whatsoever is not as interesting
as a slightly flawed analogy.

.ES

.BH

λλ An analogy is interesting if it associates (for the first time) two concepts
which are each unusually fully filled out (having many conjectures, many examples,
many interest features, etc.).

.ES

. ASSSEC(Heuristics for the Genl/Spec facets of Any-concept)

⊗5 Fillin⊗*

.BH

λλ To  fill in  specializations of X, if it was very easy to find examples of X,

Grab some features which would indicate that an X was interesting (some entries
from X.Interest, or more remote Interest predicates garnered by rippling), and
conjoin them onto the definition of X, thereby creating a new concept.

.E

Here's one instance where the above rule was used:
It was so easy for AM to produce examples of sets that it decided to specialize
that concept. The above rule then plucked the interestingness feature
"all pairs of members satisfy the same rare predicate" and conjoined it to the
old definition of Sets. The new concept, Interesting-sets, included all singletons
(because all pairs of members drawn from a singleton satisfy the predicate Equal)
and empty sets.

.BH

λλ To fill in generalizations of concept X,

Take the definition e, and replace it by a known generalization of e.
If e is a concept, use e.Genl; if e is a conjunction, then remove a conjunct
or generalize a conjunct; if e is a disjunction, then add a disjunct or
generalize a disjunct; if e is negated, then specialize the negate; if e is
an example of E, then replace e by "any example of E"; if e satisfies any
property P, then replace e by "anything satisfying P"; if e is a constant,
then replace e by a new variable or an existing one which could assume value e.

.E

This rule contains a bag of tricks for generalizing any LISP predicate, the
definition of any concept. They are all ⊗4syntactic⊗* tricks, however.
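With definitions represented as nested lists (as AM's LISP predicates essentially were), two of those tricks might be sketched as follows (an illustrative Python reconstruction, not AM's actual code; the 'TRUE' filler disjunct is invented):

```python
def generalize(defn):
    # Yield syntactic generalizations of a definition such as
    # ['AND', p, q] or ['OR', p, q].
    op, *args = defn
    if op == 'AND':
        # remove one conjunct at a time
        for i in range(len(args)):
            rest = args[:i] + args[i + 1:]
            yield rest[0] if len(rest) == 1 else ['AND'] + rest
    elif op == 'OR':
        # add a disjunct (here the trivially-true test)
        yield ['OR'] + args + [['TRUE']]
```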

.BH

λλ To fill in generalizations of concept X,
If some conjecture exists about "all X's and Y's" or "in X or Y", for some other
concept Y,

Create a new concept, a generalization of both X and Y, defined as their disjunction.

.E

This rule contains another trick  for generalizing any concept, although it is
more meaningful, more semantic than the previous rule's tricks. Many theorems are
true about numbers with 1 or 2 divisors, so this might be one reasonable way to
generalize Numbers-with-α1-divisor into a new useful$$ at least, several theorems
will be stated more concisely using this new concept: Numbers with 1 or 2 divisors.
$ concept.

.BH

λλ To fill in generalizations of concept X,

If other generalizations G1, G2 of X exist but are TOO general, 

Create a new concept, a generalization
of X and a specialization of
both G1 and G2, defined as the conjunction of G1 and G2's definitions.

.E

Thus when AM generalizes Reverse-all-levels into Reverse-top-level and
Reverse-first-element, the above rule causes AM to create a new operation,
which reverses the top level and which reverses the CAR$$
also the CAR of the CAR, etc., until a non-list is encountered. $
of the original list.
While not particularly useful, the reader should observe that it is in
fact midway in generality between the original Reverse function and the
first two generalizations.

.BH

λλ To fill in specializations of concept X,

Take the definition e, and replace it by a known specialization of e.
If e is a concept, use e.Spec; if e is a disjunction, then remove a disjunct
or specialize a disjunct; if e is a conjunction, then add a conjunct or
specialize a conjunct; if e is negated, then generalize the negate; if e is
"any example of E", then replace e by a particular example of E; if e is
"anything satisfying P", then replace e by a particular satisfier of P; 
if e is a variable, then replace it by a well-chosen constant.

.E

This rule contains a bag of tricks for specializing any LISP predicate, the
definition of any concept. They are all ⊗4syntactic⊗* tricks, however.

.BH

λλ To fill in specializations of concept X,
If some conjecture exists about "all X's which are also Y's" or "in X and Y", 
for some other
concept Y,

Create a new concept, a specialization of both X and Y, defined as their conjunction.

.E

This rule contains another trick  for specializing any concept, although it is
more meaningful, more semantic than the previous rule's tricks. Many theorems 
about primes contain the condition "p>2"; i.e., they are really
true about primes which are odd. So this might be one reasonable way to
specialize Primes into a new 
concept: by conjoining the definitions of Primes and Odd-numbers, into the 
new concept Odd-primes.

.BH

λλ To fill in specializations of concept X,

If other specializations S1, S2 of X exist but are TOO restrictive to be interesting,

Create a new concept, a specialization
of X and a generalization of
both S1 and S2, defined as the disjunction of S1 and S2's definitions.

.ES

.BH

λλ To fill in generalizations of concept X, when a recursive definition of C exists,

If the definition contains two conjoined recursive calls, replace them by a
disjunction or eliminate one call entirely.

If there is only one recursive call, disjoin a second on another destructive
function applied to the original argument. If the original destructive function is
one of {CAR,CDR}, then let the new one be the other member of that pair.

.E

AM uses this rule to turn Equal-lists into two variants of Same-length-as.


.BH

λλ To fill in specializations of concept X, when a recursive definition of C exists,

If the definition contains two disjoined recursive calls, replace them by a
conjunction or eliminate one call entirely.

If there is only one recursive call, conjoin a second on another destructive
function applied to the original argument. Often the two recursions will be on the
CAR and the CDR of the original argument to the predicate which is the definition 
for X.

.E

This is closely related to the preceding rule.

.BH 

λλ To fill in specializations of concept X,

Find within a definition of X (at even parity of NOT's) an expression of
the form "For some x in X, P(x)", and replace it either by
"For all x in X, P(x)", or by P(x⊗7↓o⊗*).

.E

.QUADRULE: BNN;

Thus "sets, all pairs of whose members satisfy SOME interesting predicate"
gets specialized into "sets, all pairs of whose members satisfy Equality".
The same rule, with "even parity" replaced by "odd parity", is useful for
⊗4generalizing⊗* a definition.
This rule is really 4 separate rules, in the LISP program.
The same rule, with the transformations going in the opposite direction,
is also used for generalizing. The same rule, with the transformations
reversed and the parity reversed, is used for specializing a definition.
Here is that doubly-switched rule:

.BH 

λλ To fill in specializations of concept X,

Find within a definition of X (at odd parity of NOT's) an expression of
the form "For all x in X, P(x)", and replace it either by
"For some x in X, P(x)", or by P(x⊗7↓o⊗*).  Or replace "P(αα)", where αα is a
constant, by "For some x in A, P(x)" where A is a concept of which αα is one
example.

.ES
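The parity bookkeeping shared by these four rules amounts to a recursive walk which counts the enclosing NOTs (a Python sketch over list-structured definitions; only the some→all transformation is shown):

```python
def specialize(expr, parity=0):
    # At even parity of NOTs, strengthen "For some x, P(x)"
    # into "For all x, P(x)".
    if not isinstance(expr, list):
        return expr
    op, *args = expr
    if op == 'NOT':
        return ['NOT'] + [specialize(a, parity + 1) for a in args]
    if op == 'SOME' and parity % 2 == 0:
        return ['ALL'] + args
    return [op] + [specialize(a, parity) for a in args]
```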

.BH

λλ When creating a specialization S of concept C,

Note that S.Genl should contain C, and that C.Spec should contain S.

.E

The analogous rule exists, in which all spec and genl are switched.
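The bookkeeping is a symmetric pair of inverse links (a Python sketch; the facet names mirror AM's Genl/Spec, but the representation is invented):

```python
def note_specialization(concepts, c, s):
    # After creating specialization S of C:
    # S.Genl gains C, and C.Spec gains S.
    concepts[s].setdefault('Genl', []).append(c)
    concepts[c].setdefault('Spec', []).append(s)
```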



⊗5 Suggest⊗*

.BH

λλ After creating a new specialization S of concept C,

Explicitly look for ties between S and other known specializations of C.

.E

For example, after AM defines the new concept of Numbers-with-3-divisors,
it looks around for ties between that kind of number and other kinds
of numbers.

.BH

λλ After creating a new generalization G of concept C,

Explicitly look for ties between G and other close generalizations of C.

.E

For example, AM defined the new predicates Same-size-CARs and Same-size-CDRs$$
Two lists satisfy Same-size-CARs iff they have the same number of members.
Two lists satisfy Same-size-CDRs iff (when written out in standard LISP
notation) they have the same number of initial left parentheses and also
have the same first identifier following that parenthesis. $
as two generalizations of Equality. The above rule then suggested that AM
explicitly try to find some connection between these two new predicates.
Although ⊗4AM⊗* failed, Don Knuth (using a similar heuristic, perhaps) 
also looked for a connection, and found one: it turns out that the two
predicates are both ways of defining the relation we intuitively understand
as "having the same length as".

.BH

λλ After creating a new specialization S of concept C,

Consider looking for examples of S. 

.E

This has to be said explicitly, because all too often a concept is specialized
into vacuousness.


.BH

λλ After creating a new generalization G of concept C,

Consider looking for examples of G. 

.E

This has to be said explicitly, because all too often a concept is generalized
into vacuous universality. This rule is less useful to AM than the preceding
one.

.BH

λλ If concept C possesses some very interesting property lacked by one of its
specializations S,

Then consider creating a concept intermediate in specialization between C and S,
and see whether that possesses the property.

.E; INTERMES: BNN;

This rule will trigger whenever a new generalization or specialization is created.

.BH

λλ If concept S is now very interesting, and S was created as a specialization of
some earlier concept C,

Give extra consideration to specializing S, and to specializing concept C again
(but in different ways than ever before).

.E

The next rule is the analog of the preceding one. They incorporate tiny bits of
the strategies of hill-climbing and learning from one's successes.

.BH

λλ If concept G is now very interesting, and G was created as a generalization of
some earlier concept C,

Give extra consideration to generalizing G, and to generalizing  C in other ways.

.E

The analogous rules exist, for concepts that have become so boring they've just
been discarded:

.BH

λλ If concept X proved to be a dead-end, and X was created as a generalization of
(specialization of) some earlier concept C,

Give less consideration to generalizing (specializing) X, and to generalizing
(specializing) C in other ways in the future.

.ES




⊗5 Check⊗*

.BH

λλ When checking a generalization G of concept C,

Specifically test to ensure that G is not equivalent to C.

The easiest way  is to examine the non-examples  (especially boundary
non-examples)  of C, and  look for one  satisfying G;  or examine the
examples of G (esp. boundary) and look for one failing to satisfy C.

If they appear to be the same concept, look carefully at G. Are there
any specializations of G whose examples have never been filled in? If
so,  then by  all means suggest  looking for  such concepts' examples
before concluding that G and C are really equivalent.

.INDENT 8,16,0

If they are the same, then replace one by a conjecture.

If they are different, make sure that some boundary  non-example of C
(which is an example of G) is explicitly stored for C.

.E

.RESERVE: BNN;

.RESERVEP: PAGE;

This  rule makes  sure  that AM  is  not deluding  itself.   When  AM
generalizes               Numbers-with-α1-divisor                into
Numbers-which-equal-their-no-of-divisors, it still hasn't gotten past
the  singleton set {1}. The  conjecture in this case  would be "⊗4The
only  number which  equals  its  own  number  of  divisors  is  1⊗*".
Typically, when a generalization G of C turns out to be equivalent to
C, there is a theorem lurking around, of the form "All G's also satisfy
this  property...", where  the  property  is the  "extra"  constraint
present in  C's definition but absent  from G's.  This  rule also was
used when AM had just found some examples of Sets. AM almost believed
that  all Unordered-Structures  were  also Sets,  but  the last  main
clause of the rule had AM notice that Bags is a specialization of
Unordered-structures which had never  had any of its examples  filled
in. As a result, AM printed  out this message: "Almost concluded that
Unordered-structures  are  also always  Sets.   But  will  wait until
examples of Bags are found.  Perhaps some Bags will not be  Sets." In
fact, examples of Bags are soon found, and they aren't sets.
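The easy half of that test, searching for a separating item among the stored (boundary) examples, is only a few lines (a Python sketch; the definitions are ordinary predicates):

```python
def apparently_equivalent(defn_c, defn_g, c_nonexamples, g_examples):
    # G is provably distinct from C if some non-example of C
    # satisfies G, or some example of G fails C.
    if any(defn_g(e) for e in c_nonexamples):
        return False
    if any(not defn_c(e) for e in g_examples):
        return False
    return True   # no separating item found: suspect a conjecture
```

For instance, "divisible by 4" and its generalization "even" are separated by the non-example 2; "even" generalized into an equivalent rephrasing of itself would survive the test, triggering the conjecture machinery.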

.BH

λλ When checking a specialization S of concept C,

Specifically test to ensure that S is not equivalent to C.

.INDENT 8,16,0

If they are the same, then replace one by a conjecture.

If  they are different,  make sure  that some  boundary example  of C
(which is not an example of S) is explicitly stored for C.

.E

This rule is similar to the preceding one. If adding a new constraint
P to  the  definition doesn't  change the  concept C,  then there  is
probably a theorem there of the form "All C's also satisfy constraint
P".

.BH


λλ When checking a specialization S of a specialization X of a concept C,

if there exist other specs. of specs. of C, 

then ensure that none of them are the
same as S.
This is especially worthwhile if the specializing operators in each case were
the same but reversed in order.

.E

.SOFSC: BNN;

Thus we can add a constraint to C and collapse the first two arguments, or we can
collapse the arguments and add the constraint; either way, we get to the same
very specialized new concept. The above rule helps detect those accidental duplicates.


.BH

λλ When checking the Genl or Spec facet entries for concept C,

ensure that C.Genl and C.Spec have no common member Z. If they do, then
conjecture that C and Z are actually equivalent.

.E


⊗5 Interest⊗*


.BH

λλ A generalization of X is interesting if all the previously-known boundary
non-examples are now boundary examples of the concept.

.E

A check is included here to ensure that the new concept was not simply defined
as the closure of the old one.

.BH

λλ A specialization of X is interesting if all the previously-known boundary
examples are now boundary non-examples of the new specialized concept.

.E

A check is included here to ensure that the new concept was not simply defined
as the interior of the old one.

. ASSSEC(Heuristics for the View facet of Any-concept)

⊗5 Fillin⊗*

.BH

λλ To  fill in  View facet entries for X, 

Find an interesting operation F whose range is X,

and indicate that any member of Domain(F) can be viewed as an X just by
running F on it.

.E

While trying to fill in the View facet of Even-nos, AM used this rule. It
located the operation Doubling, whose domain is Numbers and whose range is
Even-nos. Then the rule created a new entry: "to view any number as if it were
an even number, double it". This is a twisted affirmation of the standard 
correspondence
between natural numbers and even natural numbers.


. ASSSEC(Heuristics for the In-dom/ran-of facets of Any-concept)

⊗5 Fillin⊗*

.BH

λλ To fill in In-dom-of facet of concept X,

Ripple down the tree of concepts, starting at Active, to empirically determine
which active concepts can be run on X's.

.E

This can usually be decided by inspecting the Domain/range facets of the
Active concepts. Occasionally, AM must actually try to run an active on
sample X's, to see whether it fails or returns a value.

.BH

λλ To fill in In-ran-of facet of concept X,

Ripple down the tree of concepts, starting at Active, to empirically determine
which active concepts can be run to yield X's.

.E

This can usually be decided by inspecting the Domain/range facets of the
Active concepts. Occasionally, AM inspects known examples of some Active concept,
to see if any of the results are X's.

.BH

λλ While filling in In-dom-of facet of X,

Look especially carefully for Operations which transform examples and
non-examples into each other; even better are those which push boundary
exs/non-exs "across the boundary".

.E

This was used to note that Insert and Delete had a lot to do with the
concept of Singleton.


. ASSSEC(Heuristics for the Definition facet of Any-concept)

⊗5 Suggest⊗*

.BH

λλ If there are no known definitions for concept X,

Then it is crucial that AM spend some time looking for such definitions.

.E

This situation might occur if only an Algorithm is present for some concept.
In that case, the above rule would suggest a new, high-priority task, and
AM would then twist the algorithm into a (probably very inefficient) definition.

⊗5 Check⊗*

.BH

λλ When checking the Definition facet of concept C,

Ensure that each  member of C.Exs satisfies all  definitions present,
and  each  non-example   fails  all  definitions.  If  there  is  one
dissenting definition, modify it,  and move the offending example  to
the boundary.

.E

There is  little real "checking"  that can  be done to  a definition,
aside   from   internal   consistency   (if   there   exist   several
supposedly-equivalent definitions, then AM can at least ensure they
agree on the known examples and non-examples of the concept). If the
Intuitions facets  were  permitted,  then each  definition  could  be
checked for intuitive rationality.

.BH

λλ When checking the Definition facet of concept C,

Try to find and eliminate any redundant  constraints, try to find and
eliminate any circularity, check that any recursion will terminate.

.E

Here  are  the  other  few tricks  that  AM  knows  for "checking"  a
definition. For each clause above,  AM has a very limited ability  to
detect and  patch up "bugs"  of that  sort.  Checking  that recursion
will terminate, for example, is done by examining the argument to the
recursive call, and verifying that it contains (at  some level before
the  original  argument)  an   application  of  a  LISP  function  on
Destructive-LISP-functions-list. There  is no  intelligent  inference
that is going  on here, and for that  reason the process is  not even
mentioned within the body of this document.

.A4ANYBLAST: SSSECNUM;

.A4ANYBLASTP: PAGE;

.ASSECP(Heuristics for dealing with any Active concept)

All the rules below are applicable to tasks which involve operations,
predicates, relations, functions, etc. In short, they apply to all the
concepts AM knows about which involve ⊗4doing⊗* something, which involve action.

⊗5Fillin⊗*


.BH

λλ If the current task is to fill in examples of the activity
F,

One  way to get them is  to run F on  randomly chosen examples of the
domain of F.

.E

.RUNOP: BNN;

Thus, to find examples of Equality, AM repeatedly executed Equality.Alg
on randomly chosen pairs of objects.
AM found examples of Compositions by actually picking a pair of operations
at random and trying to compose them. Of course, most such "unmotivated" 
compositions turned out to be uninteresting.
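The rule itself is nearly trivial to render (a Python sketch; `f` and the example pool stand in for F.Algs and the examples of F's domain):

```python
import random

def examples_by_running(f, domain_examples, tries=20):
    # Run F on randomly chosen examples of its domain; each
    # argument/value pair becomes a candidate entry for F.Exs.
    return [(arg, f(arg))
            for arg in (random.choice(domain_examples)
                        for _ in range(tries))]
```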

.BH

λλ While filling in examples of the activity F, 
by running F.Algs on random arguments from F.Domain,

It is worth the effort to specifically include extreme or boundary examples
of the domain of F, among the arguments on which F.Algs is run.

.ES

.BH

λλ To fill in a Domain  entry for active concept F,

Run F on various entities, rippling down the tree of concepts, to determine
empirically where F seems to be defined.

.E

This may shock the reader, as it sounds dumb and explosive, but the concepts
are arranged in a tree (using Genl links), so the search is really quite fast.
Although this rule is rarely used, it always seems to give surprisingly good results.

.BH

λλ To fill in generalizations of active F,

Consider just extending F, by enlarging its domain. 
Revise F.Defn as little as possible.

.E

Although Equality is initially only for structures, AM extends it
(using the same definition, actually) to a predicate over all pairs of entities.

.BH

λλ To fill in specializations of active F,

Consider just restricting F, by shrinking its domain. 
Check F.Defn to see if some optimization is possible.

.ES

.BH

λλ After an algorithm is known for F, if AM wants a better one,

AM is permitted to ask the user to provide a fast but opaque algorithm for F.

.E

This was used a few times, especially for inverse functions. A nontrivial
open-ended
research problem (*)$$
Starred problems are quite difficult, and should take the reader roughly
3 full lifetimes to master. Readers not believing in reincarnation should
therefore skip such problems. $
is to collect a body of rules which transform an inefficient
algorithm into a computationally acceptable one.

.BH

λλ If the current task is to fill in boundary (non-)examples of the activity
F,

One  way to get them is  to run F on  randomly chosen boundary examples and
(with proper safeguards) boundary non-examples of the
domain of F.

.E

Proper safeguards are required to ensure that F.Algs doesn't loop or cause an error
when fed a slightly-wrong (slightly-illegal) argument. In LISP, a timer and an
ERRORSET suffice as crude safeguards.
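The ERRORSET half of those safeguards has a direct analog (a Python sketch; a genuine timer would need threads or signals and is omitted here):

```python
def errorset(f, *args):
    # Run f, trapping any error -- the LISP ERRORSET convention:
    # return a flag plus the value, rather than propagating failure.
    try:
        return (True, f(*args))
    except Exception:
        return (False, None)
```

Running F this way on a slightly-illegal argument yields a failure flag instead of halting the whole search for boundary non-examples.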

.BH

λλ If the current task is to fill in (boundary) non-examples of the activity F,

One low-interest way to get them is  to run F on  randomly chosen examples of its
domain, and then replace the value obtained by some other (very similar) value.
Also, be sure to check that the resultant i/o pair doesn't accidentally
satisfy F.Defn.

.E

This rule is really two rules: for ⊗4boundary⊗* non-examples, just change the
final value slightly. For typical non-examples, change the result significantly.
That's what the parentheses mean: if you read the words inside in the IF part,
then read the words inside the parentheses in the THEN part as well, ⊗4or⊗*
omit them in both cases.

.DOUBRULE: BNN;

⊗5 Check⊗*

.BH

λλ When checking an algorithm for active F,

run that algorithm and ensure that the input/output satisfy F.Defn.

.ES

.BH

λλ When checking a definition d for active concept F,

Run one of its algorithms and ensure that the input/output satisfy d.

.E

This is the converse of the preceding rule. They simply say that the
definition and algorithm facets must be mutually consistent.

.BH

λλ While checking examples or boundary examples of the active concept F,

Ensure that each input/output pair is consistent with F.Dom/range.

.E

If the domain/range entry is <D1 D2... Dk → R>, and the i/o pair is
<d↓1 d↓2... d↓k , r>, then
each component d↓i of the input must be an example of  the
corresponding Di, and the output r must be an example of R.
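That consistency check is mechanical (a Python sketch; `is_example_of` is a hypothetical membership test standing in for AM's Exs machinery):

```python
def fits_dom_range(args, result, dom_range, is_example_of):
    # dom_range is <D1 D2 ... Dk -> R>: each argument must be an
    # example of its Di, and the result an example of R.
    *domains, rng = dom_range
    return (len(args) == len(domains)
            and all(is_example_of(a, d) for a, d in zip(args, domains))
            and is_example_of(result, rng))
```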

.BH

λλ When checking examples of the active concept F, 

If any argument(s) to F were concepts, tag their In-domain-of facets with `F'.

.E



⊗5 Suggest⊗*

.BH

λλ If there are no known algorithms for active concept F,

Then AM should spend some time looking for such algorithms.

.E

This situation might occur if only a Definition is present for some concept.
In that case, the above rule would suggest a new, high-priority task, and
AM would then twist the definition into a (probably very inefficient) algorithm.
The rule below is similar, for the Domain/range facet:


.BH

λλ If the Domain/range facet of active concept F is blank,

Then AM should spend some time looking for specifications of F's domain and range.

.ES

.BH

λλ If a Domain of active F is encountered frequently, either within theorems
or as domain or range of other operations and predicates,

Then define that domain as a separate concept, and raise the Worth of F slightly.

.E

This led to the definition of "Objects⊗7x⊗*Structures", which is the domain
of a lot of Insertion and Deletion operations, Membership testing predicates, etc.

.BH

λλ It is worthwhile to explicitly calculate the value of F for
all distinguished (extreme, boundary, interesting) members of 
and subsets of
its domain.

.ES

.BH

λλ If some domain component of F has a very interesting specialization,

Then consider restricting F (along that component) to that smaller domain.

.E

Note that these last couple rules deal with the image of interesting domain
items. The next rule deals with the inverse image (pre-image) of unusual range
items. We saw earlier in this document  (Chapter 2) how this rule led to
the definition of Prime numbers.

.BH

λλ If the range of F contains interesting items, or an interesting specialization,

Then it is worthwhile to consider their inverse image under F.

.ES

.BH

λλ When trying to fill in new Algorithms for Active concept F,

Try to transform any conjectures about F into (pieces of) new algorithms.

.E

This is one place where a sophisticated body of transformation rules might
be inserted. Others are working on this problem [see Darlington], and AM
only contains a few simple tricks for turning conjectures into procedures.
For example, "All primes are odd, except `2'" is transformed into
a more efficient search for primes: a separate test for x=2, followed
by a search through only Odd-numbers.
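The transformation of that particular conjecture can be sketched as follows (Python, with a deliberately naive definition-style primality test):

```python
def is_prime(n):
    # Definition-style test: n has exactly two divisors.
    return n > 1 and sum(1 for d in range(1, n + 1) if n % d == 0) == 2

def primes_up_to(limit):
    """The conjecture "all primes are odd, except 2", turned into a
    procedure: a separate test for x=2, followed by a search through
    only the odd numbers."""
    found = [2] if limit >= 2 else []
    found.extend(n for n in range(3, limit + 1, 2) if is_prime(n))
    return found
```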

.BH

λλ After trying in vain to fill in examples of active concept F,

Locate the domain of F, and suggest that AM try to fill in examples
for each component of that domain.

.E

Thus after failing to find examples for Set-union, AM was told to
find examples of Sets, because that could have let the previous task
succeed. There is no recursion here: after the sets are found, AM will
not automatically go back to finding examples of Set-union. In practice,
that task was eventually proposed and chosen again, and succeeded this time.

.BH

λλ After trying in vain to find some examples of X, if many non-examples exist,

Consider the conjecture
that X is vacuous, null, empty, always-False. Consider generalizing X.

.ES

.BH

λλ After working on an Active concept F, 

Give a slight, ephemeral boost to tasks involving Domain(F).
This will be a moderate-size boost for each task which asks 
to fill in examples of that domain/range component,
and a very tiny boost for each other task mentioning such a concept.

.E

.FROB2: BNN;

This is both a supplement to the more general "focus of attention" rule,
and a nontrivial heuristic for finding valuable new tasks.
It is the partial converse of rule {[3]FROB1}.


⊗5 Interest⊗*

.BH

λλ An active concept F is interesting if there are other operations with the
same domain as F, and if they are (on the average) fairly interesting.
If the other operations' domain is only similar, then they must be
very interesting and have some valuable conjectures
tied to them, if they are to be allowed
to push up F's interestingness rating.

.E

The value of having the same domain/range is the ability to compose with them.
If the domain/range is only similar, then AM can hope for analogies.

.BH

λλ An active concept is interesting if it was recently created.

.E

This is a slight extra boost given to each new operation, predicate, etc.
This bonus decays rapidly with time, and thus so will the overall worth of
the concept, unless some interesting property is encountered quickly.


.BH

λλ An active concept is interesting if its domain is very interesting.

.E

An important common case of this rule is when the domain is interesting
because all its members are equal to each other.

.ASSECP(Heuristics for dealing with any Predicate)

Each of these heuristics can be assumed to be prefaced by a 
clause of the form "If the current task is to deal with concept X,
where X isa Predicate,...". This will not be repeated below for each rule.

⊗5 Fillin⊗*

.BH

λλ If the current task was (Fill-in examples of X),

and X is a predicate,

and more than 100 items are known in the domain of X,

and at least 10 cpu seconds were spent trying to randomly instantiate
X,

and the ratio of successes/failures is both >0 and less than .05

.OOO

Then  add the following task to  the agenda: (Fill-in generalizations
of X), for the following reason:

"X is rarely satisfied; a slightly less restrictive  concept might be
more interesting".

This  reason's  rating  is  computed  as  three times  the  ratio  of
nonexamples/examples found.

.E

This rule says to generalize a predicate if it rarely succeeds (returns T).
One use for this was when Equality was found to be quite rare; the resultant
generalizations did indeed turn out to be more valuable (numbers).
A similar use was found for predicates which tested for identical equality
of two angles, of two triangles, and of two lines. Their generalizations
were also valuable (congruence, similarity, parallel, equal-measure).

.BH

λλ To fill in Domain/range entries for predicate P,

P can operate on the domain of any specialization of P,

P can operate on some subset of the domain of any generalization of P,

The range of P will necessarily be the doubleton set {T,F},

P is guaranteed to return T if any of its specializations do, and F if any of its
generalizations do.

.E

This contains a compiled version of what we mean when we say that one predicate is
a generalization or specialization of another. Viewed as relations, as subsets of a
cross-product of spaces, this notion of general/special is just that of superset/subset.
The last line of the rule is meant to indicate that if, say, a generalized
version of P returns F over some entire domain set, then so will P.
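A sketch of this shortcut evaluation in Python; the three list-predicates are hypothetical, chosen so that `identical` really is a specialization and `within2` really is a generalization of `same_length`:

```python
def evaluate_with_relatives(p, args, specializations, generalizations):
    """Shortcut evaluation of predicate p: p is guaranteed True if any
    specialization returns True, and False if any generalization
    returns False; otherwise fall back on p itself."""
    if any(s(*args) for s in specializations):
        return True
    if not all(g(*args) for g in generalizations):
        return False
    return p(*args)

# Hypothetical predicates over pairs of lists.  Viewed as relations,
# identical is a subset (specialization) of same_length, which is a
# subset of within2 (a generalization).
identical   = lambda x, y: x == y
same_length = lambda x, y: len(x) == len(y)
within2     = lambda x, y: abs(len(x) - len(y)) <= 2
```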

⊗5 Suggest⊗*

.BH

λλ If all the values of Active concept F happen to be Truth-values,
and F is not known to be a predicate,

Then conjecture that F is in fact a predicate.

.E

This rule is placed on the Suggest facet because, if placed anywhere else on this
concept, it could only be seen as relevant by AM
if AM already knew that F were a predicate.
On the other hand, the rule can't be placed, e.g., on Active.Fillin, 
since just forgetting (deleting) this "Predicate"
concept should be enough to
delete all references to predicates anywhere in the system.


⊗5 Interest⊗*

.BH

λλ A predicate P is interesting if its domain is the space of concepts,
especially if it suggests new concepts to study for some expected payoff.

.E

This very high level heuristic wasn't really used by AM during its runs.

.ASSECP(Heuristics for dealing with any Operation)

⊗5Fillin⊗*

.BH

λλ To fill in examples of operation F (with domain A and range B),

when many examples αα of A are already known,

and F maps some of those examples αα into distinguished members (esp: extrema)
b of B,

Then (for each such distinguished member "b"εB) study F-1-(b) as a new concept.
That is, isolate those members of A whose F-value is the unusual item bεB.

.E

This rule says to investigate the inverse image of an unusual
item b, under the interesting operation f. When b=2 and
f=number-of-divisors-of, this rule leads to the definition
of prime numbers.
When b=Phi$$ the empty set, NIL, α{α} $ and f=Intersection, the rule
led to the discovery of the concept of disjointness of sets.
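The inverse-image construction is simple enough to sketch directly (Python; `number_of_divisors` plays the role of f, with b=2):

```python
def inverse_image(f, domain_examples, b):
    """Isolate those members of the domain whose f-value is the
    distinguished item b -- the seed of a possible new concept."""
    return [a for a in domain_examples if f(a) == b]

def number_of_divisors(n):
    # Naive definition-style count of the divisors of n.
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# The inverse image of b=2 under number-of-divisors-of is the primes.
primes = inverse_image(number_of_divisors, range(1, 30), 2)
```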

.BH

λλ To fill in Domain/range entries for operation F,

F can operate on the domain of any specialization of F,

F can operate on some subset of the domain of any generalization of F,

F can only (and will always) produce values lying in the range of each generalization of F,

F can -- with the proper arguments -- produce values lying in the range of any
particular specialization of F.

.E

This contains a compiled version of what we mean when we say that one operation is
a generalization or specialization of another. Viewed as relations, as subsets of a
cross-product of spaces, this notion of general/special is just that of superset/subset.
Recall that operations can be multi-valued.

.BH

λλ To fill in Domain/range entries for operation F, when some exist already,

Take an entry of the form <D1 D2... Dn → R> and see if DixR is meaningful
for some i (esp.: i=n). 

If so, then remove Di from the left side of the entry, and replace
R by DixR, and modify the definition of F.

.E

In LISP, "meaningful" is coded as: either DixR is equivalent to an already-known
concept, or else it is found in at least two interesting conjectures. This is
probably an instance of what McDermott calls natural stupidity$$ See [McDermott]
for natural stupidity.
He criticizes the use of very intelligent-sounding names for otherwise-simple
program modules. But consider "Homo sapiens", which means "wise man". Now
↓_there's_↓ a misleading label...$.
This rule is tagged as being explosive, and is not used very often by AM.

.BH

λλ To fill in a Range entry for operation F,

Run F on various domain examples,
especially boundary examples, to collect examples of the range.
Then ripple down the tree of concepts to determine
empirically where F seems to be sending its values.

.E

This may shock the reader, as it sounds dumb and explosive, but the concepts
are arranged in a tree (using Genl links), so the search is really quite fast.
Although this rule is rarely used, it always seems to give surprisingly good results.
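A sketch of the rippling search, assuming the concept tree is a simple parent-to-children mapping and each concept carries a definition predicate (all names here are illustrative):

```python
def empirical_range(f, sample_inputs, children, defns, root="Anything"):
    """Run f on sample inputs, then ripple down the Genl/Spec tree to
    the most specific concept whose definition admits every value."""
    values = [f(*args) for args in sample_inputs]
    current, progress = root, True
    while progress:
        progress = False
        for child in children.get(current, []):
            if all(defns[child](v) for v in values):
                current, progress = child, True
                break
    return current

# A tiny hypothetical concept tree with definition predicates.
children = {"Anything": ["Number"], "Number": ["Even-no"]}
defns = {"Number":  lambda v: isinstance(v, int),
         "Even-no": lambda v: isinstance(v, int) and v % 2 == 0}

double = lambda x: 2 * x
found = empirical_range(double, [(i,) for i in range(5)], children, defns)
```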

.BH

λλ If operation F has just been applied, and has yielded a new concept C as its
result,

Then carefully examine F.Dom/range to try to find out what C.Isa should be.
C.Isa will be all legal entries listed as values of the range of F.

.E

.GETRR: BNN;

.GETRRP: PAGE;

When F=Compose, say AM has just created C=Empty-o-Insert.$$i.e., insert x into
a structure S and then see if S is empty. This leads AM to realize that
inserting can never result in an empty structure. $
What is C? It is a concept, of course, but what else? By examining the Domain/range
facet of Compose, AM finds the entry <Active Active α→ Active>. Aha! So C must be
an Active. But AM also finds the entry <Predicate Active α→ Predicate>.
Since "Empty" is a predicate, the final composition C must also be a predicate.
So C.Isa would be filled in with "Predicate". AM thus used the above rule to
determine that Empty-o-Insert was a predicate. Even if this rule were excised,
AM could still determine that fact, painfully, by noticing that all the values
were truth-values.

.BH

λλ If operation F has just been applied to A1,A2,..., 
and has yielded a new concept C as its
result,

Then add F to C.In-ran-of; add C to the In-dom-of facet of all the Ai's which
are concepts; add <A1... α→ C> to  F.Exs.

.ES

.BH

λλ When filling in examples of operation F, if F takes some existing concepts
A1, A2,... and (may) produce a new concept,

Then only consider, as potential A↓i's, those concepts which already have some
examples. Prefer the A↓i's to be interesting, to have a high worth rating,
to have some interesting conjectures about them, etc.

.E

.HAVEX: BNN;

The danger here is of, e.g., Composing two operations which turn out to be vacuous,
or of Conjoining an empty concept onto another, or of proliferating variants of a
boring concept, etc.



⊗5Check⊗*

Below are rules used to check existing entries on various facets of operations.

.BH

λλ 
To check the domain/range entries on the operation F,

IF a domain/range entry has the form (D D D... → R),

and all the D's are equal, and R is a generalization of D,


THEN it's worth seeing whether (D D D... → D) 
is consistent with all known examples of the operation:

.INDENT 12,16,0

If there are no known examples, 
add a task to the agenda requesting they be filled in.

If there are examples, and (D D D... → D) is consistent, add it to the Domain/range
facet of this operation.

If there are some contradicting examples, create a new concept which is defined as
this operation restricted to (D D D... → D).

.E

When AM restricts Bag-union to numbers (bags of T's), the new operation has a
Domain/range entry of the form (Numbers Numbers → Bag). The above rule has AM
investigate whether the range specification mightn't also be narrowed down to
Number. In this case it is a great help. The rule often fails, of course:
the sum of two primes is rarely a prime, the cross-product of two lists-of-atoms
is not a list-of-atoms, etc.
Since this rule is almost instantaneous to execute, it's cost-effective overall.
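The narrowing check can be sketched as follows; the membership test `is_number` is a stand-in for running a concept's definition facet:

```python
def narrow_range(dom_range, examples, is_example_of):
    """Given an entry (D D ... -> R) whose domain components are all the
    same concept D, propose (D D ... -> D) if every known example's
    result is itself a D."""
    domains, _ = dom_range
    if len(set(domains)) != 1:
        return None
    d = domains[0]
    if all(is_example_of(result, d) for _, result in examples):
        return (domains, d)
    return None

# Hypothetical membership test: here a "Number" is just an int.
is_number = lambda item, concept: (isinstance(item, int)
                                   if concept == "Number" else True)

# Bag-union restricted to numbers: every observed result is a Number,
# so (Number Number -> Bag) narrows to (Number Number -> Number).
entry = narrow_range((("Number", "Number"), "Bag"),
                     [((2, 3), 5), ((0, 4), 4)], is_number)
```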

.BH

λλ When checking the Domain/range entries for operation F,

Ensure that the boundary items in the range can actually be reached by F.
If not, see whether the range is really just some known specialization of the stated range concept.

.E

This rule is a typical checking rule. Note that it is active, not passive:
it might alter the Domain/range facet of F, if it finds an error there.

.BH

λλ When checking examples of the operation F, for each such example,

If the value returned by F is a concept C, add `F' to C.In-range-of.

.ES


⊗5 Suggest⊗*

.BH

λλ Whenever the domain of operation F has changed,

check whether the range has also changed. Often the range will change
analogously to the domain, where the operation itself is the Analogy.

.E

.BH

λλ After working on Operation F, 

Give a slight, ephemeral boost to tasks involving Range(F).

.E

This will be a moderate-size boost for each task which asks 
to fill in examples of that range concept,
and a very tiny boost for each other task mentioning such a concept.
This is both a supplement to the more general "focus of attention" rule,
and a nontrivial heuristic for finding valuable new tasks.
It is an extension of rule number {[3]FROB2}, and a partial converse
to rule {[3]FROB1}.



⊗5 Interest⊗*

.BH

λλ An operation F is interesting if there are other operations with the
same domain and range, and if they are (on the average) fairly interesting.

.ES

.BH

λλ An operation F is interesting if it is the first operation connecting its
domain to its range, and if those domain/range sets are themselves valuable
concepts, and there is no analogy between them, and there are some 
interesting theorems
involving the domain of the operation.

.E

The above two rules say that F can be valuable because it's similar to
other, already-liked operations, or because it 
is totally different from any known operation. Although these two criteria
are nonintersecting, their union represents only a small fraction of the
operations that get created: typically, ⊗4neither⊗* rule will trigger.

.BH

λλ An operation F is interesting if its domain and range are very interesting.

.E

Domain and range here refer to the concepts from which a legal argument for F
may be drawn, and in which all results of F must lie. They are the D and the
R in the domain/range facet entry <D → R> for concept F.

.BH

λλ An operation F is interesting if the values of F satisfy some unusual property
which is not (in general) satisfied by the arguments to F.

.E

Thus doubling is interesting because it always returns an even number.
This is one case where the interesting property can be deduced trivially just by
looking at the domain and range of the operation: Numbers→Even-nos.


.BH

λλ An operation is interesting if its values are interesting.

.E

.ONCE TURN ON "{}"

This can mean that each value is interesting (e.g., Compose is well-received because
it produces many new, valuable concepts as its values).
Or, it can mean that the operation's values,
gathered together
into one big set, are interesting as a set.
Unlike the preceding rule, this one has no mention whatsoever of the domain items,
the arguments to the operation.
This rule was used to good advantage frequently by AM.
For example, Factorings of numbers are interesting because (using rule
{[3]SRI1}) for all x, Factorings(x) is interesting in exactly the same way.
Namely, Factorings(x), viewed as a set, always contains precisely 
one item which has a
certain interesting property  (see rule {[3]SRI2}).
Namely, all its members are primes (see rule {[3]SRI1} again).
This explains how AM noticed that all numbers seem to factor uniquely into primes.

.BH

λλ An operation is interesting if its values are interesting, ignoring the 
images of boundary items from the domain.

.E

That is, if the image of the domain -- minus its boundary -- is interesting.

.BH

λλ An operation is interesting if its values 
on the boundary items from the domain
are very interesting. Ignore the non-boundary parts of the domain.

.E

That is, if the image of the boundary of the domain is interesting.

.BH

λλ An operation is interesting if it leaves intact any interesting properties
of its argument(s). This is even better if it eliminates some undesirable
properties, or adds some new, desirable ones.

.E

Thus a new, specialized kind of Insertion operation is interesting
if, even though it stuffs more items into a structure, the nice
properties of the structure remain. The operation "Merge" is interesting
for this very reason: it inserts items into an alphabetized list, yet
it doesn't destroy that interesting property of the list.
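A sketch of such a property-preserving insertion, with Python's `bisect` standing in for Merge's alphabetized insertion:

```python
import bisect

def merge(item, alphabetized):
    """Insert an item into an alphabetized list without destroying
    the interesting property (sortedness) of the structure."""
    result = list(alphabetized)
    bisect.insort(result, item)
    return result

merged = merge("cat", ["ant", "bee", "fox"])
```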


.BH

λλ An operation is interesting if its domain and range are equal. 
If there is more than one domain component, then at least one of them
should equal the range. The more components equal to the range, the better.

.DRCO: BNN;

.E

Thus "Insertion" qualifies here, since its domain/range entry is <Anything
Structures α→ Structures>.
But "Union" is even better, since both domain components equal the range, namely
Structures.

.BH

λλ An operation is mildly interesting if its range is related somehow (e.g. 
specialization of)
to one or more components of its domain. The more the better.

.E

A weakened form of the preceding rule.

.BH

λλ If the result of applying operation F is a new concept C, 

Then the interestingness of F is weakly tied to that of C.

.E

If the new concept C becomes very valuable, then F will rise in interest.
If C is so bad it gets forgotten, F will not be regarded quite as highly.
When Canonize scores big its first time used, it rises in interest. This caused
AM to form poorly-motivated canonizations, which led to dismal results,
which gradually lowered the rating of Canonize to where it was originally.

.ASSECP(Heuristics for dealing with any Composition)

.COMPH: SSECNUM;

.COMPHP: PAGE;

⊗5 Fillin⊗*

.BH

λλ To fill in algorithms for operation F, where F is a composition G-o-H,

One algorithm is: apply H and then apply G to the result.

.E

Of course this rule is not much more than the definition of what it means
to compose two operations.

.BH

λλ To fill in Domain/range entries for operation F, where F is a composition G-o-H,

Tentatively assume that the domain is Domain(H), and range is Range(G).
More precisely,  the domain will be the result of substituting
Domain(H) for Range(H) wherever Range(H) appears (or: just once) in Domain(G).

.E

.CDRH1: BNN;

Thus for F=Divides-o-Count, where Divides:<Number,Number → {T,F}>, and
Count:<Bag → Number>, the above rule
would say that the domain/range entries for F might be:
<Bag,Bag → {T,F}>,
<Number,Bag → {T,F}>,
and/or <Bag,Number → {T,F}>.
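The substitution this rule describes, applied to the Divides-o-Count example above, can be sketched as (Python; AM's own representation was LISP property lists):

```python
def compose_dom_ranges(g_entry, h_entry):
    """Dom/range guesses for F = G-o-H: substitute Domain(H) for
    Range(H) where it appears in Domain(G) -- once per guess, plus
    one guess substituting every occurrence."""
    g_dom, g_range = g_entry
    h_dom, h_range = h_entry
    guesses = []
    for i, d in enumerate(g_dom):
        if d == h_range:
            guesses.append((g_dom[:i] + h_dom + g_dom[i + 1:], g_range))
    every = tuple(x for d in g_dom
                  for x in (h_dom if d == h_range else (d,)))
    if every not in [dom for dom, _ in guesses]:
        guesses.append((every, g_range))
    return guesses

# Divides: <Number Number -> {T,F}>;  Count: <Bag -> Number>.
entries = compose_dom_ranges((("Number", "Number"), "{T,F}"),
                             (("Bag",), "Number"))
```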

.BH

λλ To fill in Domain/range entries for operation F, where F is a composition G-o-H,
But Range(H) does not occur as a component of Domain(G),

The range of F is still Range(G), but the domain of F is computed as follows:
Ascertain the component X of Domain(G) having the biggest (fractional) overlap
with Range(H). Then substitute Domain(H) for X in Domain(G). The result is
the value to be used for Domain(F).

.E

.CDRH2: BNN;

This rule is a second-order correction to the previous one. If there is
no absolute equality, then a large intersection will suffice. Notice that
F may no longer be defined on all of its domain, even if G and H are.
If identical equality is taken as the maximum possible overlap between two
concepts, then this rule can be used to replace the preceding one completely.


.BH

λλ When trying to fill in the Isa entries for a composition F=G-o-H,

Examine G.Isa and H.Isa, and especially their intersection. Some of those
concepts may also claim F as an example. Run their definition facet to see.

.E

.ISARG: BNN;

.ISARGP: PAGE;

To see how this is encoded into LISP, turn to page {[3]ISARGP2}.

.BH

λλ When trying to fill in the Genl or Spec entries for a composition F=G-o-H,

Examine the corresponding facet on G and on H.

.E

This rule is similar to the preceding one, but wasn't as useful or as reliable.

.BH

λλ A satisfactory initial guess at the Worth value of composition F=G-o-H is
the root-sum-of-squares of G.Worth and H.Worth.

.ES



⊗5 Check⊗*

.BH

λλ Check that F-o-G is really not the same as F, or the same as G.
Spend some time checking whether F-o-G is equivalent to any already-known active
concept.

.E

This happens often enough to make it worth stating explicitly. Often, for example,
F will not even bother looking at the result of G.

.BH

λλ When checking the Algorithms entries for a composition F=G-o-H,

If range(H) is not wholly contained in the domain of G, 

then the algorithm must contain a "legality" check, ensuring that H(x) is a valid
member of the domain of G.

.E


⊗5 Suggest⊗*

.BH

λλ Given an interesting operation F:A↑n→A, 

consider composing F with itself.

.E

This may result in more than one new operation. From F=division,
for example, we get the two operations (x/y)/z and x/(y/z).
AM quickly realizes that such variants are really equivalent,
and eventually realizes that F(F(x,y),z)=F(x,F(y,z)) is a common
situation (which we call associativity of F).
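The two variants, and the empirical associativity check, can be sketched as:

```python
import itertools

def self_compositions(f):
    """Composing a two-argument f with itself yields two variants."""
    left = lambda x, y, z: f(f(x, y), z)
    right = lambda x, y, z: f(x, f(y, z))
    return left, right

def seems_associative(f, samples):
    """Empirically test whether the two variants coincide."""
    left, right = self_compositions(f)
    return all(left(x, y, z) == right(x, y, z)
               for x, y, z in itertools.product(samples, repeat=3))
```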

.BH

λλ If the newly-formed domain of the composition F=G-o-H contains more
than one occurrence of the concept D, and this isn't true of G or H,

Then consider creating a new operation, a specialization of F,
by Coalescing the domain/range of F, by eliminating one of the D components.

.E

.COAC: BNN;

Thus when Insert-o-Delete is formed, the old Domain/range entries were
both of the form <Anything Structure → Structure>. The newly-created entry
for Insert-o-Delete was <Anything Anything Structure → Structure>; i.e.,
take x, delete it from S, then insert y into S. The above rule had AM
turn this into a new operation, with domain/range <Anything Structure → Structure>,
which deleted x from S and then inserted the very same x back into S.


⊗5 Interest⊗*

.COMI: PAGE;

.BH

λλ A composition F=G-o-H is interesting if G and H are very interesting.

.ES

.BH

λλ A composition F=G-o-H is interesting if F has an interesting property
not possessed by either G or H.

.ES

.BH

λλ A composition F=G-o-H is interesting if F has most of the interesting properties
which are possessed by either G or H.
This is slightly reduced if both G and H possess the property.

.ES

.BH

λλ A composition F=G-o-H is interesting if F lacks any undesirable properties
true of G or H.
This is greatly increased if both G and H possess the bad property, unless
G and H are very closely related to each other (e.g., H=G, or H=G-1-).

.E

The numeric impact of each of these rules was guessed at initially, and
has never needed tuning. Here is an area where experimentation might prove
interesting.

.BH

λλ A composition F=G-o-H is interesting if F maps interesting subsets of domain(H)
into interesting subsets of range(G).

F is to be judged even 
more interesting
if the image was not thought to be interesting until
after it was explicitly isolated and studied because of part 1 of this very rule.

.E

Here, an "interesting" subset of domain(H) is one so judged by Interests(domain(H)).
A completely different set of criteria will be used to judge the interestingness of
the resultant image under F. Namely, for that purpose, AM will ask
for range(G).Interest, and ripple outwards to look for related interest features.

.BH

λλ A composition F=G-o-H is interesting if F-1- maps interesting subsets of range(G)
into interesting subsets of domain(F).

This is even better if the preimage
wasn't hitherto realized as interesting.

.E

This is the converse of the preceding rule. Again, "interesting" is judged by two
different sets of criteria.

.BH

λλ A composition F=G-o-H is interesting if F maps interesting elements of domain(H)
into interesting subsets of range(G).

λλ A composition F=G-o-H is interesting if F-1- maps interesting elements of range(G)
into interesting subsets of domain(F).

This is even better if the subset is only now seen to be interesting.

.E

This is the analogue of the preceding two rules, but for individual items rather
than for whole subsets of the domain and range of F.

.BH

λλ A composition F=G-o-H is interesting if range(H) is equal to, not just intersects,
one component of domain(G).

.ES

.BH

λλ A composition F=G-o-H is mildly interesting if range(H) is a specialization of
one component of domain(G).

.E

This is a weakened version of the preceding feature. Such a composition is interesting
because it is guaranteed to always be applicable. If Range(H) merely intersects
a domain component of G, then there must be an extra check, after computing
H(x), to ensure it lies within the legal domain of G, before trying to run G on that
new entity H(x).

.BH

λλ A composition F=G-o-H is more interesting if range(G) is equal to a domain component
of H.

.E ONCE TURN ON "{}"

This is over and above the slight boost given to the composition because 
it is an ⊗4operation⊗*
whose domain
and range coincide (see rule {[3]DRCO}).


.ASSEC(Heuristics for dealing with any Insertions)

⊗5 Check⊗*

.BH

λλ When checking an example of any kind of  insertion of x into S,

Ensure that x is a member of S.

.E

The only types of insertions known to AM are ⊗4unconditional⊗* insertions,
so this rule is valid. It is useful for ensuring that a particular new operation
really is an insertion-operation after all!

.ASSEC(Heuristics for dealing with the operation Coalesce)

⊗5 Fillin⊗*

.BH

λλ When coalescing F(a,b,c,...), whose domain/range is <A B C... → R>,

A good choice of two domain components to coalesce is a pair of identically equal
ones. Barring that, choose a pair related by specialization (eliminate the
more general one). Barring that, choose a pair with a common specialization
S.

.E

Thus to coalesce the operation "Insert-o-Delete", which takes two items and a structure,
deletes the first argument from the structure and then inserts the second argument,
AM examines its Domain/range entry: <Anything Anything Structure → Structure>.
Although it would be legal to collapse the second and third arguments, the above
rule says it makes more sense in general to collapse the first and second. In fact,
in that case, AM gets an operation which tells it something about multiple-elements
structures.

.BH

λλ When filling in Algorithms
for a coalesced version G of active concept F,

One natural algorithm is simply to call on F.Algs, with two arguments the same.

.E

Of course the two identical arguments are those which have been decided to be merged.
This will be decided before the definition and algorithm facets are filled in.
Thus a natural algorithm for Square is to call on TIMES.Alg(x,x).
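In Python, that natural coalesced algorithm is just (a sketch; Times/Square stand in for AM's TIMES.Alg):

```python
def coalesce(f):
    """The natural algorithm for a coalesced version of a two-argument
    active concept f: call f with both arguments the same."""
    return lambda x: f(x, x)

times = lambda x, y: x * y
square = coalesce(times)  # Times(x,x) is what we know as squaring
```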
The following rule is similar:

.BH

λλ When filling in Definitions
for a coalesced version G of active concept F,

One natural Definition is simply to call on F.Defn, with two arguments the same.

.ES

.BH

λλ When filling in the Worth of a new coalesced version of F,

A suitable value is the worth of F. 

.E

This is a compromise between the knowledge that the new operation will probably
be less interesting than F, and the knowledge that it may lead to even more
valuable new concepts (e.g., ⊗4its⊗* inverse may be more interesting than F's).




⊗5 Check⊗*

.BH

λλ If G and H are each two coalescings away from F, for any F, 

Then check that G and H aren't really the same, by writing their definitions
out in terms of F.Defn.

.E; ONCE TURN ON "{}"

Thus if R(a,b,c) is really F(a,b,a,c), and S(a,b,c) is really F(a,b,c,c),
and R and S get coalesced again, into G(a,b) which is R(a,b,a) and into
H(a,b) which is S(a,b,a), then both G and H are really F(a,b,a,a).
The order of coalescing is unimportant.
This is a boost to the more general impetus for checking this sort of thing,
rule {[3]SOFSC}. This rule is faster, containing a special-purpose program
for untangling argument-calls rapidly.

⊗5Suggest⊗*

.BH

λλ If a  newly-interesting  active concept F(x,y)  takes  a pair  of  N's  as
arguments,

Then create  a new  concept, a specialization of F, 
called  F-Itself, taking  just one N  as
argument, defined as F(x,x), with initial worth Worth(F).

If AM has never coalesced F before, this gets a slight bonus value.

If AM has coalesced F before, say into S, then modify this suggestion's value
according to the current worth of S.

The lower the system's interest-threshold is, the more attractive this suggestion
becomes.

.E

AM used this rule to coalesce many active concepts. Times(x,x) is what we
know as squaring; Equality(x,x) turned out to be the constant predicate True;
Intersect(x,x) turned
out to be the identity operator; 
Compose(f,f) was an interesting "iteration" operator$$ e.g.,
Compose(Compose,Compose) is an operator which
takes 3 operations f,g,h and forms their joint composition f o g o h $; etc.
This rule is really a bundle of little meta-rules modifying one suggestion.
The very last part of the above rule indicates that if the system is desperate,
this is the least distasteful way to "take a chance" on a high-payoff high-risk
course of action. It is more sane than, e.g., randomly composing two operations
until a nice new one is created.

.BH

λλ If concept F takes only one argument,

Then it is not worthwhile to try to coalesce it.

.E

This rule was of little help cpu-timewise, since even if
AM tried to coalesce such an active concept, it would fail almost
instantaneously. The rule did help make AM appear smarter, however.

.ASSEC(Heuristics for dealing with the operation Canonize)

⊗5 Fillin⊗*

.BH

λλ If the task is to Canonize predicates P1 and P2 (over AxA)$$ That is,
find a function F such that P1(x,y) iff P2(F(x),F(y)). $, and the difference
between a definition of P1 and definition of P2 is just that P2
performs some extra check that P1 doesn't,

Then F should convert any a⊗6ε**A into a member of A which automatically
satisfies that extra constraint.

.E; ONCE TURN ON "{}"

Thus when P1=Same-length, P2=Equality, A=Lists, the extra constraint that P2
satisfies is just that it
recurs in the CAR direction: the CARs of the two arguments must also satisfy P2.
P1 doesn't have such a requirement. The above rule then has AM seek out a way to
guarantee that the CARs will always satisfy Equality. A special hand-crafted
piece of knowledge tells AM that since "T=T" is an example of Equality, one
solution is for all the CARs to be the atom T. Then the operation F must contain
a procedure for converting each member of a structure to the atom "T". 
Thus (A C α{Z A Bα} Q Q) would be converted to (T T T T T).
This rule is a specialized, "compiled" version of the idea expressed in rule
{[3]LOOKDIF}.
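A sketch of that synthesized F, and of the guarantee that Same-length on originals is exactly Equality on canonical forms (as in the example above, a nested structure counts as a single member):

```python
def canonize(structure):
    """F: convert each member of a structure to the atom T, so that
    the members of any two canonical lists automatically satisfy
    Equality.  A nested structure counts as a single member."""
    return ["T"] * len(structure)

same_length = lambda x, y: len(x) == len(y)   # P1
equal       = lambda x, y: x == y             # P2

a = ["A", "C", ["Z", "A", "B"], "Q", "Q"]
b = [1, 2, 3, 4, 5]
c = [1, 2]
```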

.BH

λλ If the task is to Canonize P1 and P2 over AxA, trying to synthesize F,
where A=Structure,


Then perhaps there is a distinguished type of structure B which the argument
to F should always be converted into.
In that case, consider P1 and P2 as two predicates over BxB.

.E

This special-purpose rule is used to guide a series of experiments, to determine
whether P1 is affected by adding multiple copies of existing elements into its
arguments, and whether its value is affected by rearranging some of its arguments'
elements. In the case of P1=Same-size, the answers are: multiple elements do make
a difference, but rearrangement doesn't. So the canonical type of structure for
F=Size must be one which is Mult-eles-allowed and also one which is Unordered.
Namely, a Bag. Thus F is modified so that it first converts its argument to a Bag.
Then Equality and Same-size are viewed as taking a pair of Bags, and Size is
viewed as taking one Bag and giving back a canonical bag.
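
The two experiments can be sketched as below (a reconstruction, assuming Same-size merely compares lengths):

```python
# Perturb an argument by duplicating an element and by reordering it, and
# observe whether the predicate's value changes.
def same_size(x, y):          # assumed stand-in for AM's Same-size
    return len(x) == len(y)

base = ["a", "b", "c"]
multiplicity_matters = same_size(base + ["a"], base) != same_size(base, base)
order_matters = same_size(list(reversed(base)), base) != same_size(base, base)
print(multiplicity_matters, order_matters)  # True False → canonical type: Bag
```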

.BH

λλ In the above situation, when F is created from P1 and P2,

an acceptable value for the worth of F is the maximum of the worths of P1 and P2.

.ES

.BH

λλ IF the current task has just created a canonical specialization C2 of
concept C1, with respect to operations F1  and F2, [i.e., two members
of C2 satisfy F1 iff they satisfy F2],

THEN add the following entry to the Analogies facet of C2:

.TABS 24,29,34 TURN ON "\"

\<C1\F1\Operations-on-and-into(C1)>

\<C2\F2\Those operations restricted to C2's>

.E

This rather incoherent rule says that it's worth taking the trouble to
study the behavior of each operation when it is restricted to working on
standard or "canonical" items. Moreover, some of the old relationships may
carry over -- or at least have analogues -- in this restricted world.
When numbers are discovered as canonical bags, all the bag operations are
restricted to what we know and love as numeric operations. Many of the old
bag-theoretic relationships have numeric analogues. Thus we knew that
the bag-difference of x and x was the empty bag; this is still true for
x a canonical bag, but we would word it as "x minus x is zero".
This is because the restriction of Bag-difference to canonical bags
(bags of T's) is precisely the operation we call subtraction.
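
The point can be illustrated with a toy representation (canonical bags as lists of T's; the bag-difference code here is a guess at the obvious algorithm, not AM's):

```python
# Bag-difference removes one occurrence from b1 for each member of b2.
# Restricted to canonical bags, it behaves exactly like subtraction.
def bag_difference(b1, b2):
    result = list(b1)
    for e in b2:
        if e in result:
            result.remove(e)
    return result

x = ["T"] * 6
print(bag_difference(x, x))                       # → []  ("x minus x is zero")
print(len(bag_difference(["T"] * 5, ["T"] * 3)))  # → 2   (5 - 3 = 2)
```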

.BH

λλ When Canonize works on P1, P2  (two predicates),
and produces an operation, F,

Both predicates must share a common Domain, of the form AxA for some concept A,
and the new operation F can have <A α→ A> as one of its Domain/range entries.

.ES

.BH

λλ In the same situation as the last rule,

One conjecture is that P1 and P2 are equal, when
restricted to working on pairs of Canonical A's (A's which are in F(A),
items x which are the image F(a) of some aεA).

.E

After canonizing Equal and Same-size into the new operation Length,
AM conjectures that two canonical bags are equal iff they have the same size.



⊗5 Suggest⊗*

.BH

λλ When Canonize works on P1, P2, both predicates over AxA,
and produces an operation
F:A→A,

It is worth defining and studying the image F(A); i.e., the totality of
A's which are canonical, already in standard form.
When this new concept Canonical-A is defined, the <A → A> entry on F.Dom/range
can be supplemented by an entry of the form <A → Canonical-A>.

.E

Thus AM studied Canonical-Bags, which turned out to be isomorphic to the
natural numbers.

.BH

λλ If P1 is a very interesting predicate over AxA, for some interesting concept A,

Then look over P1.Spec for some other predicate P2 which is also over AxA, and
which has some interesting properties P1 lacks. For each such predicate P2,
consider applying Canonize(P1,P2).

.ES

.BH

λλ After Canonize works on P1, P2, both predicates over AxA,
producing an operation
F:A→A, and Canonical-A has been defined,

It is worth restricting operations in In-dom-of(A) to Canonical-A.
Some new properties may emerge.

.E

Thus after defining Canonical-Bags, AM looked at Bags.In-dom-of.
In that list was the operation "Bag-union". So AM considered the
restriction of Bag-union to Canonical-bags. Instead of Bag-union
mapping two bags into a new bag, this new operation took two
canonical-bags and mapped them into a new bag. AM later noticed that
this new bag was itself always canonical. Thus was born the operation
we call "Addition".
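
The same toy representation shows the birth of Addition (assuming, as the bag-theoretic relationships above suggest, that Bag-union pools the members of both bags so multiplicities add):

```python
# Restricted to canonical bags -- bags of T's -- Bag-union is addition.
def bag_union(b1, b2):
    return b1 + b2

result = bag_union(["T"] * 2, ["T"] * 3)
print(len(result))                    # → 5: 2 + 3 = 5
print(all(e == "T" for e in result))  # → True: the result is again canonical
```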

.BH

λλ After Canonical-A is produced,

It is marginally 
worth trying to restrict operations in In-range-of(A) to map into Canonical-A.

.E

This gives an added boost to picking Union to restrict, since it is in both
Bags.In-dom-of and Bags.In-ran-of.
This rule is much harder to implement, since it demands that the range be restricted.
There are just a few special-purpose tricks AM knows to do this.
Restricting the domain is, by comparison, much cleaner. AM simply creates a new
concept with the same definition, but with a more restricted domain/range facet.
For restricting the range, AM must insert into the definition a check to ensure that
the final result is inside Canonical-A, not just in A. This leads to a very
inefficient definition.

.BH

λλ After Canonical-A is produced,

It is worthwhile to consider filling in examples (especially
boundary) of that new concept. 

.E; ONCE TURN ON "{}"

This is above and beyond the slight push which rule {[3]NEWCEX} gives that task.

.ASSEC(Heuristics for dealing with the operation Substitute)

⊗5 Suggest⊗*

.BH

λλ If two different variables are used to represent the same entity,$$
When we say that x and y represent the same entity, what we really mean
is that they have the same domain of identity (e.g., ∀xεBags) and they
are equally free (there is a precise logical definition of all this, but
there is little point in presenting it here.) $
then substitute one for the other.

This is very important if the two occurrences are within the same
entry on some facet of a single concept, and less so otherwise.

The dominant variable should be the one typed out previously to the
user; barring that, the older usage; barring that, the one closest to 
the letter "a" in the alphabet.

.E

This heuristic was used less often -- and proved less impressive -- than
was originally anticipated by the author. 
Since most concepts were begotten from older ones, they always assumed
the same variable namings, hence there were very few mismatches.
A special test was needed to explicitly check for "x=y" occurring as
a conjunct somewhere, in which case we removed it and substituted y for x
throughout the conjunction.
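
That special test can be sketched as follows (a hypothetical reconstruction, with conjunctions represented as lists of s-expression-like nested lists):

```python
# If "x=y" occurs as a conjunct, drop it and substitute y for x throughout
# the remaining conjuncts.
def substitute(expr, old, new):
    if expr == old:
        return new
    if isinstance(expr, list):
        return [substitute(e, old, new) for e in expr]
    return expr

def simplify_conjunction(conjuncts):
    for c in conjuncts:
        if isinstance(c, list) and len(c) == 3 and c[0] == "=":
            _, x, y = c
            rest = [k for k in conjuncts if k is not c]
            return [substitute(k, x, y) for k in rest]
    return conjuncts

print(simplify_conjunction([["=", "x", "y"], ["P", "x"], ["Q", "y"]]))
# → [['P', 'y'], ['Q', 'y']]
```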


.BH

λλ If two expressions (especially: two conjectures) are structurally
similar, and appear to differ by a certain substitution,

Then if the substitution is permissible, we have just arrived at the same
expression in various ways; tag it as such,

But if the substitution is not seen to be tautologous, then a new
analogy is born. Associate the constituent parts of both expressions.
This is made interesting if there are several concepts involved which
are assigned new analogues.

.E

The similar statements of the associativity of Add and Times led to this
rule's identifying them as analogous. If AM had been more sophisticated,
it might have eventually formulated some abstract algebra concepts like
"semigroup" from such analogies.

.ASSEC(Heuristics for dealing with the operation Restrict)

⊗5 Fillin⊗*

.BH

λλ When filling in definitions (algorithms)
for a restriction R of the active concept F,

One entry can simply be a call on F.Defn (F.Algs).

.E

Thus one definition of Addition will always be as a call on the old, general operation
`Bag-union.'
Of course one major reason for restricting the domain/range of an activity is that
it can be performed using a faster algorithm! So the above rule was used frequently
but rarely dramatically.

.BH

λλ When creating a restriction R of the active concept F,

Note that R.Genl should contain F, and that F.Spec should contain R.

.ES

.BH

λλ When creating a restriction R of the active concept F, 
by restricting the domain or range to some specialization S of its previous
value C,

A viable initial guess for R.Worth is F.Worth, augmented by the difference in
worth between S and C. Hopefully, S.Worth is bigger than C.Worth!

.ES



.ASSEC(Heuristics for dealing with the operation Invert)

⊗5 Fillin⊗*

.BH

λλ When filling in definitions 
for an Inversion I of the active concept F,

One "Sufficient Defn" entry can simply be a blind search through the examples of F.

.E

If we already knew that 4→16 is an example of Square, then AM can use the above rule
to quickly notice that Square-Inverse.Defn(16,4) is true.
This is almost the "essence" of inverting an operation, of course.
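
The blind-search definition can be sketched as follows (the examples list is hypothetical, standing in for the Examples facet of Square):

```python
# I(y, x) holds iff some recorded example of F maps x to y.
square_examples = [(1, 1), (2, 4), (3, 9), (4, 16)]

def square_inverse_defn(y, x, examples=square_examples):
    return (x, y) in examples

print(square_inverse_defn(16, 4))  # → True
```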



⊗5 Suggest⊗*

.BH

λλ After creating an inverted form I of some operation F,

If the only definition and algorithm entries are the "obvious" inefficient ones,

Then consider the task: "Fill in algorithms for I", because the old blind
search is just too awful to tolerate.

.ES

.ASSEC(Heuristics for dealing with Logical combinations)

⊗5 Check⊗*

.BH

λλ The user may sometimes indicate Conjunction when he really means Sequencing.

.ES

⊗5 Suggest⊗*

.BH

λλ If there is something interesting to say about entities satisfying
the disjunction (conjunction) of two concepts' definitions, 

Then consider creating a new concept defined as that logical combination of 
the two concepts' definitions.

.ES

.BH

λλ Given an implication,

Try to weaken the left side as much as possible without destroying the
validity of the whole implication.
Similarly, try to strengthen the right side of the implication.

.ES

⊗5 Interest⊗*

.BH

λλ A disjunction (conjunction) is interesting if there is a conjecture
which is very interesting yet which cannot be made about any one disjunct
(conjunct).

.E

In other words, their logical combination implies more than any constituent.

.BH

λλ An implication is interesting if the right side is more interesting than
the left side.

.ES



.BH

λλ An implication is interesting if the right side is interesting yet unexpected
based only on assuming the left side.

.ES


.ASSEC(Heuristics for dealing with Objects)

⊗5 Suggest⊗*

.BH

λλ If it is very easy to get examples of structures which are S's,

Then conjoin onto S.Defn a constraint which is garnered from Interests(S).
Thus each example of the new specialization will automatically be an interesting
example of an S type of structure.

.E

Since it's easy to find examples of Sets, and one feature (see below) which
makes a set interesting is if all pairs of members satisfy predicate P (for
some rare P), then AM actually defined this new concept:
"λ(S). S is a set, and all pairs of members of S satisfy P, for some rare
predicate P. ". This was called "Interesting-sets" by AM. AM later learned
that all singletons were Interesting-sets.
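
The definition AM formed can be sketched like this (a reconstruction; P is instantiated to equality here, which is why every singleton qualifies vacuously):

```python
from itertools import combinations

# A set is "interesting" if all pairs of its members satisfy predicate P.
def interesting_set(s, p):
    return all(p(a, b) for a, b in combinations(s, 2))

equal = lambda a, b: a == b
print(interesting_set({7}, equal))     # → True: no pairs, so vacuously interesting
print(interesting_set({1, 2}, equal))  # → False
```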

.ASSEC(Heuristics for dealing with Structures)

⊗5 Fillin⊗*

.BH

λλ To fill in examples of a kind of structure S,

Start with an empty S, pluck any other kind of example of structure X,
and transfer members one at a time from the X into the embryonic S.
Finally, check that the result satisfies S.Defn.

.E

This is useful, e.g., to convert examples of lists into examples of sets.
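
The transfer technique can be sketched as follows (a reconstruction; the Set constraint enforced here -- no duplicate members -- stands in for S.Defn):

```python
# Build a set from a list by transferring members one at a time, then check
# the result against the target definition.
def list_to_set(xs):
    s = []
    for e in xs:          # transfer members one at a time
        if e not in s:    # enforce the Set constraint as we go
            s.append(e)
    assert all(s.count(e) == 1 for e in s)   # final check against S.Defn
    return s

print(list_to_set(["A", "C", "A", "Q", "Q"]))  # → ['A', 'C', 'Q']
```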

.BH

λλ To fill in specializations of a kind of structure, 

add a new constraint that each member must satisfy,
or a constraint on all pairs of members,
or a constraint on all pairs of distinct members,
or a constraint which the structure as a whole must satisfy.
Such a constraint is often merely a stipulation of being an example of an X,
for some interesting concept X.

.E

Thus AM might specialize Bags into Bags-of-primes,
or into Bags-of-distinct-primes, or into Bags-containing-a-prime.

⊗5 Interest⊗*

.BH

λλ Structure S is mildly interesting if all members of S satisfy the interesting
predicate P, or (equivalently) if they are all accidentally examples of
the interesting concept C, or (similarly) if all pairs of members of S
satisfy the interesting binary predicate P, etc.

Also: a KIND of structure is interesting if it appears that each instance of
such a structure satisfies the above condition (for a fixed P or C).

.E

.SRI1: BNN;

Thus a singleton is interesting because all pairs of members satisfy Equal.
The concept "Singletons" is interesting because each singleton is mildly
interesting in just that same way.

.BH

λλ A structure is mildly interesting if one member is very interesting.
Even better: exactly one  member.

Also: a KIND of structure is interesting if each instance satisfies the
above condition in the same way.

.E

.SRI2: BNN;

Thus the values of TIMES-1- are interesting because they always contain
precisely one bag which is a Singleton.

.ASSEC(Heuristics for dealing with Ordered-structures)

⊗5 Fillin⊗*

.BH

λλ To fill in some new examples of the ordered structure S, when some already exist,

Pick an existing example and randomly permute its members.

.ES

.BH

λλ To fill in specializations of a kind of ordered structure, 

add a new constraint that each pair of adjacent members must satisfy,
or a constraint on all ordered pairs of 
members in the order they appear in the structure.
Such a constraint is often merely a stipulation of being an example of an X,
for some interesting concept X.

.E

Thus Lists might be specialized into Alphabetized-lists.

⊗5 Check⊗*

.BH

λλ If the structure is to be accessed sequentially until some condition is met,
and if the precise ordering is superfluous,

Then keep the structure ordered by frequency of use, the most useful element first.

.E

This is a simple data-structure management trick. If you have several rules to use
in a certain situation, and rule R is one which usually succeeds, then put R first
in the list of rules to try. Similarly, in a pattern-matcher, try first the test
most likely to detect non-matching arguments.
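
The trick can be sketched as below (a hypothetical illustration; AM's actual bookkeeping is not shown in this document):

```python
# Keep rules ordered by a running success count, so the usually-successful
# rule is tried first.
class FrequencyOrderedList:
    def __init__(self, items):
        self.counts = {item: 0 for item in items}

    def record_success(self, item):
        self.counts[item] += 1

    def ordered(self):
        return sorted(self.counts, key=self.counts.get, reverse=True)

rules = FrequencyOrderedList(["R1", "R2", "R3"])
rules.record_success("R2")
rules.record_success("R2")
print(rules.ordered()[0])  # → R2: the most useful rule is now tried first
```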

.BH

λλ If structure S is always to be maintained in alphanumeric order,

Then AM can$$ due to the current LISP implementation of data-structures $ actually
maintain it as an unordered structure, if desired.

.E


⊗5 Interest⊗*

.BH

λλ An ordered structure S is interesting if each adjacent pair of members of S
satisfies predicate P (for some rare, interesting P).

.E

When AM is told about the relation "<", it immediately thinks that
any ⊗4sorted⊗* list of numbers 
is more interesting, due to the above rule.


.ASSEC(Heuristics for dealing with Unordered-structures)

⊗5 Check⊗*

.BH

λλ To check an example of an unordered structure,

Ensure that it is in alphanumerically-sorted order. If not, then SORT it.

.E

All unordered objects are maintained in lexicographic order, so that any
two of them can be tested for equality using the fast LISP list-structure
comparator EQUAL.
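
The convention can be illustrated as follows (using Python's `sorted` and `==` as stand-ins for the sort and for LISP's EQUAL):

```python
# Unordered structures are kept sorted, so equality reduces to one fast
# structural comparison -- no set-matching search is needed.
def make_unordered(members):
    return sorted(members)     # maintain lexicographic order

s1 = make_unordered(["C", "A", "B"])
s2 = make_unordered(["B", "C", "A"])
print(s1 == s2)  # → True
```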

.ASSEC(Heuristics for dealing with Multiple-eles-structures)

⊗5 Fillin⊗*

.BH

λλ To fill in some new examples of the structure S, 
where S is a structure admitting multiple occurrences of the same element,
when some examples already exist,

Pick an existing example and randomly change the multiplicity with which various
members occur within the structure.

.ES

.ASSEC(Heuristics for dealing with Sets)

⊗5 Suggest⊗*

.BH

λλ If P is a very interesting predicate over X,

Then study these two sets: the ones for which P holds, and the ones for which
P fails.

.ES


⊗5 Interest⊗*

.BH

λλ A set S is interesting if, for some interesting predicate P,
whose domain is X,

S accidentally appears to be related strongly to 
{xεX | P(x)}, i.e., to those members of X satisfying P.

.ES